Jul 15 04:39:03.118267 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 15 04:39:03.120026 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025 Jul 15 04:39:03.120054 kernel: KASLR disabled due to lack of seed Jul 15 04:39:03.120071 kernel: efi: EFI v2.7 by EDK II Jul 15 04:39:03.120086 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Jul 15 04:39:03.120101 kernel: secureboot: Secure boot disabled Jul 15 04:39:03.120118 kernel: ACPI: Early table checksum verification disabled Jul 15 04:39:03.120133 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 15 04:39:03.120149 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 15 04:39:03.120164 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 15 04:39:03.120179 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 15 04:39:03.120198 kernel: ACPI: FACS 0x0000000078630000 000040 Jul 15 04:39:03.120213 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 15 04:39:03.120228 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 15 04:39:03.120246 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 15 04:39:03.120261 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 15 04:39:03.120282 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 15 04:39:03.120324 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 15 04:39:03.120342 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 15 04:39:03.120359 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 15 04:39:03.120375 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 15 04:39:03.120391 kernel: printk: legacy bootconsole [uart0] enabled Jul 15 04:39:03.120406 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 15 04:39:03.120423 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 15 04:39:03.120439 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Jul 15 04:39:03.120455 kernel: Zone ranges: Jul 15 04:39:03.120470 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 15 04:39:03.120492 kernel: DMA32 empty Jul 15 04:39:03.120508 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 15 04:39:03.120523 kernel: Device empty Jul 15 04:39:03.120539 kernel: Movable zone start for each node Jul 15 04:39:03.120554 kernel: Early memory node ranges Jul 15 04:39:03.120570 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 15 04:39:03.120586 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 15 04:39:03.120601 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 15 04:39:03.120617 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 15 04:39:03.120632 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 15 04:39:03.120648 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 15 04:39:03.120664 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 15 04:39:03.120683 kernel: node 0: 
[mem 0x0000000400000000-0x00000004b5ffffff] Jul 15 04:39:03.120706 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 15 04:39:03.120723 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 15 04:39:03.120740 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jul 15 04:39:03.120757 kernel: psci: probing for conduit method from ACPI. Jul 15 04:39:03.120777 kernel: psci: PSCIv1.0 detected in firmware. Jul 15 04:39:03.120794 kernel: psci: Using standard PSCI v0.2 function IDs Jul 15 04:39:03.120811 kernel: psci: Trusted OS migration not required Jul 15 04:39:03.120827 kernel: psci: SMC Calling Convention v1.1 Jul 15 04:39:03.120844 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 15 04:39:03.120861 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 15 04:39:03.120878 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 15 04:39:03.120895 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 15 04:39:03.120911 kernel: Detected PIPT I-cache on CPU0 Jul 15 04:39:03.120928 kernel: CPU features: detected: GIC system register CPU interface Jul 15 04:39:03.120944 kernel: CPU features: detected: Spectre-v2 Jul 15 04:39:03.120965 kernel: CPU features: detected: Spectre-v3a Jul 15 04:39:03.120982 kernel: CPU features: detected: Spectre-BHB Jul 15 04:39:03.120998 kernel: CPU features: detected: ARM erratum 1742098 Jul 15 04:39:03.121015 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 15 04:39:03.121031 kernel: alternatives: applying boot alternatives Jul 15 04:39:03.121050 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:39:03.121068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 04:39:03.121085 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 04:39:03.121102 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 04:39:03.121119 kernel: Fallback order for Node 0: 0 Jul 15 04:39:03.121139 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jul 15 04:39:03.121156 kernel: Policy zone: Normal Jul 15 04:39:03.121173 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 04:39:03.121189 kernel: software IO TLB: area num 2. Jul 15 04:39:03.121206 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB) Jul 15 04:39:03.121223 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 04:39:03.121239 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 04:39:03.121257 kernel: rcu: RCU event tracing is enabled. Jul 15 04:39:03.121274 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 04:39:03.121313 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 04:39:03.121333 kernel: Tracing variant of Tasks RCU enabled. Jul 15 04:39:03.121350 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 15 04:39:03.121372 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 04:39:03.121389 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 04:39:03.121406 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 04:39:03.121423 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 15 04:39:03.121439 kernel: GICv3: 96 SPIs implemented Jul 15 04:39:03.121456 kernel: GICv3: 0 Extended SPIs implemented Jul 15 04:39:03.121472 kernel: Root IRQ handler: gic_handle_irq Jul 15 04:39:03.121489 kernel: GICv3: GICv3 features: 16 PPIs Jul 15 04:39:03.121505 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 15 04:39:03.121522 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 15 04:39:03.121539 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 15 04:39:03.121555 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jul 15 04:39:03.121576 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jul 15 04:39:03.121593 kernel: GICv3: using LPI property table @0x0000000400110000 Jul 15 04:39:03.121610 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 15 04:39:03.121626 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jul 15 04:39:03.121643 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 04:39:03.121660 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 15 04:39:03.121677 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 15 04:39:03.121694 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 15 04:39:03.121711 kernel: Console: colour dummy device 80x25 Jul 15 04:39:03.121729 kernel: printk: legacy console [tty1] enabled Jul 15 04:39:03.121746 kernel: ACPI: Core revision 20240827 Jul 15 04:39:03.121767 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 15 04:39:03.121785 kernel: pid_max: default: 32768 minimum: 301 Jul 15 04:39:03.121802 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 04:39:03.121819 kernel: landlock: Up and running. Jul 15 04:39:03.121836 kernel: SELinux: Initializing. Jul 15 04:39:03.121853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:39:03.121870 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:39:03.121888 kernel: rcu: Hierarchical SRCU implementation. Jul 15 04:39:03.121905 kernel: rcu: Max phase no-delay instances is 400. Jul 15 04:39:03.121926 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 04:39:03.121943 kernel: Remapping and enabling EFI services. Jul 15 04:39:03.121959 kernel: smp: Bringing up secondary CPUs ... Jul 15 04:39:03.121976 kernel: Detected PIPT I-cache on CPU1 Jul 15 04:39:03.121994 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 15 04:39:03.122011 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jul 15 04:39:03.122028 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 15 04:39:03.122045 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 04:39:03.122062 kernel: SMP: Total of 2 processors activated. 
Jul 15 04:39:03.122091 kernel: CPU: All CPU(s) started at EL1 Jul 15 04:39:03.122109 kernel: CPU features: detected: 32-bit EL0 Support Jul 15 04:39:03.122130 kernel: CPU features: detected: 32-bit EL1 Support Jul 15 04:39:03.122148 kernel: CPU features: detected: CRC32 instructions Jul 15 04:39:03.122166 kernel: alternatives: applying system-wide alternatives Jul 15 04:39:03.122184 kernel: Memory: 3796580K/4030464K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 212536K reserved, 16384K cma-reserved) Jul 15 04:39:03.122202 kernel: devtmpfs: initialized Jul 15 04:39:03.122224 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 04:39:03.122242 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 04:39:03.122260 kernel: 16928 pages in range for non-PLT usage Jul 15 04:39:03.122278 kernel: 508448 pages in range for PLT usage Jul 15 04:39:03.122328 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 04:39:03.122348 kernel: SMBIOS 3.0.0 present. Jul 15 04:39:03.122366 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 15 04:39:03.122384 kernel: DMI: Memory slots populated: 0/0 Jul 15 04:39:03.122402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 04:39:03.122426 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 15 04:39:03.122444 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 15 04:39:03.122462 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 15 04:39:03.122480 kernel: audit: initializing netlink subsys (disabled) Jul 15 04:39:03.122498 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1 Jul 15 04:39:03.122515 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 04:39:03.122533 kernel: cpuidle: using governor menu Jul 15 04:39:03.122551 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 15 04:39:03.122568 kernel: ASID allocator initialised with 65536 entries Jul 15 04:39:03.122590 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 04:39:03.122608 kernel: Serial: AMBA PL011 UART driver Jul 15 04:39:03.122626 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 04:39:03.122644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 04:39:03.122662 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 15 04:39:03.122680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 15 04:39:03.122698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 04:39:03.122715 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 04:39:03.122733 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 15 04:39:03.122755 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 15 04:39:03.122773 kernel: ACPI: Added _OSI(Module Device) Jul 15 04:39:03.122790 kernel: ACPI: Added _OSI(Processor Device) Jul 15 04:39:03.122808 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 04:39:03.122826 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 04:39:03.122844 kernel: ACPI: Interpreter enabled Jul 15 04:39:03.122862 kernel: ACPI: Using GIC for interrupt routing Jul 15 04:39:03.122879 kernel: ACPI: MCFG table detected, 1 entries Jul 15 04:39:03.122897 kernel: ACPI: CPU0 has been hot-added Jul 15 04:39:03.122918 kernel: ACPI: CPU1 has been hot-added Jul 15 04:39:03.122937 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 15 04:39:03.123226 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 04:39:03.123455 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 15 04:39:03.123641 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 15 04:39:03.123821 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 15 04:39:03.124021 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 15 04:39:03.124053 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 15 04:39:03.124072 kernel: acpiphp: Slot [1] registered Jul 15 04:39:03.124090 kernel: acpiphp: Slot [2] registered Jul 15 04:39:03.124108 kernel: acpiphp: Slot [3] registered Jul 15 04:39:03.124126 kernel: acpiphp: Slot [4] registered Jul 15 04:39:03.124143 kernel: acpiphp: Slot [5] registered Jul 15 04:39:03.124161 kernel: acpiphp: Slot [6] registered Jul 15 04:39:03.124178 kernel: acpiphp: Slot [7] registered Jul 15 04:39:03.124196 kernel: acpiphp: Slot [8] registered Jul 15 04:39:03.124213 kernel: acpiphp: Slot [9] registered Jul 15 04:39:03.124235 kernel: acpiphp: Slot [10] registered Jul 15 04:39:03.124253 kernel: acpiphp: Slot [11] registered Jul 15 04:39:03.124271 kernel: acpiphp: Slot [12] registered Jul 15 04:39:03.124320 kernel: acpiphp: Slot [13] registered Jul 15 04:39:03.124839 kernel: acpiphp: Slot [14] registered Jul 15 04:39:03.124859 kernel: acpiphp: Slot [15] registered Jul 15 04:39:03.124877 kernel: acpiphp: Slot [16] registered Jul 15 04:39:03.124895 kernel: acpiphp: Slot [17] registered Jul 15 04:39:03.124913 kernel: acpiphp: Slot [18] registered Jul 15 04:39:03.124938 kernel: acpiphp: Slot [19] registered Jul 15 04:39:03.124956 kernel: acpiphp: Slot [20] registered Jul 15 04:39:03.124973 kernel: acpiphp: Slot [21] registered Jul 15 
04:39:03.124991 kernel: acpiphp: Slot [22] registered Jul 15 04:39:03.125009 kernel: acpiphp: Slot [23] registered Jul 15 04:39:03.125026 kernel: acpiphp: Slot [24] registered Jul 15 04:39:03.125044 kernel: acpiphp: Slot [25] registered Jul 15 04:39:03.125062 kernel: acpiphp: Slot [26] registered Jul 15 04:39:03.125080 kernel: acpiphp: Slot [27] registered Jul 15 04:39:03.125097 kernel: acpiphp: Slot [28] registered Jul 15 04:39:03.125119 kernel: acpiphp: Slot [29] registered Jul 15 04:39:03.125137 kernel: acpiphp: Slot [30] registered Jul 15 04:39:03.125154 kernel: acpiphp: Slot [31] registered Jul 15 04:39:03.125172 kernel: PCI host bridge to bus 0000:00 Jul 15 04:39:03.125420 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 15 04:39:03.125600 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 15 04:39:03.125767 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 15 04:39:03.125934 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 15 04:39:03.126169 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jul 15 04:39:03.126421 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jul 15 04:39:03.126621 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jul 15 04:39:03.126824 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jul 15 04:39:03.127018 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jul 15 04:39:03.127213 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 15 04:39:03.127501 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jul 15 04:39:03.127701 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jul 15 04:39:03.127893 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jul 15 04:39:03.128113 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jul 15 04:39:03.128843 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 15 04:39:03.129056 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Jul 15 04:39:03.129248 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Jul 15 04:39:03.129477 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Jul 15 04:39:03.129670 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Jul 15 04:39:03.129863 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Jul 15 04:39:03.130040 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 15 04:39:03.130335 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 15 04:39:03.130512 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 15 04:39:03.130538 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 15 04:39:03.130566 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 15 04:39:03.130584 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 15 04:39:03.130602 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 15 04:39:03.130620 kernel: iommu: Default domain type: Translated Jul 15 04:39:03.130639 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 15 04:39:03.130656 kernel: efivars: Registered efivars operations Jul 15 04:39:03.130674 kernel: vgaarb: loaded Jul 15 04:39:03.130761 kernel: clocksource: Switched to clocksource arch_sys_counter 
Jul 15 04:39:03.130784 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 04:39:03.130808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 04:39:03.130826 kernel: pnp: PnP ACPI init Jul 15 04:39:03.131045 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 15 04:39:03.131072 kernel: pnp: PnP ACPI: found 1 devices Jul 15 04:39:03.131090 kernel: NET: Registered PF_INET protocol family Jul 15 04:39:03.131108 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 04:39:03.131126 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 04:39:03.131145 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 04:39:03.131169 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 04:39:03.131188 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 04:39:03.131206 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 04:39:03.131224 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:39:03.131242 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:39:03.131260 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 04:39:03.131278 kernel: PCI: CLS 0 bytes, default 64 Jul 15 04:39:03.131329 kernel: kvm [1]: HYP mode not available Jul 15 04:39:03.131348 kernel: Initialise system trusted keyrings Jul 15 04:39:03.131373 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 04:39:03.131391 kernel: Key type asymmetric registered Jul 15 04:39:03.131408 kernel: Asymmetric key parser 'x509' registered Jul 15 04:39:03.131426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 04:39:03.131444 kernel: io scheduler mq-deadline registered Jul 15 04:39:03.131462 kernel: io scheduler kyber registered Jul 15 04:39:03.131480 kernel: io scheduler bfq registered Jul 15 04:39:03.131681 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 15 04:39:03.131711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 15 04:39:03.131730 kernel: ACPI: button: Power Button [PWRB] Jul 15 04:39:03.131748 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 15 04:39:03.131766 kernel: ACPI: button: Sleep Button [SLPB] Jul 15 04:39:03.131784 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 04:39:03.131803 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 15 04:39:03.132007 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 15 04:39:03.132035 kernel: printk: legacy console [ttyS0] disabled Jul 15 04:39:03.132054 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 15 04:39:03.132078 kernel: printk: legacy console [ttyS0] enabled Jul 15 04:39:03.132096 kernel: printk: legacy bootconsole [uart0] disabled Jul 15 04:39:03.132114 kernel: thunder_xcv, ver 1.0 Jul 15 04:39:03.132131 kernel: thunder_bgx, ver 1.0 Jul 15 04:39:03.132149 kernel: nicpf, ver 1.0 Jul 15 04:39:03.132167 kernel: nicvf, ver 1.0 Jul 15 04:39:03.132396 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 15 04:39:03.132575 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:39:02 UTC (1752554342) Jul 15 04:39:03.132605 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 15 04:39:03.132624 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 
(0,80000003) counters available Jul 15 04:39:03.132642 kernel: watchdog: NMI not fully supported Jul 15 04:39:03.132659 kernel: NET: Registered PF_INET6 protocol family Jul 15 04:39:03.132677 kernel: watchdog: Hard watchdog permanently disabled Jul 15 04:39:03.132695 kernel: Segment Routing with IPv6 Jul 15 04:39:03.132712 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 04:39:03.132730 kernel: NET: Registered PF_PACKET protocol family Jul 15 04:39:03.132748 kernel: Key type dns_resolver registered Jul 15 04:39:03.132769 kernel: registered taskstats version 1 Jul 15 04:39:03.132787 kernel: Loading compiled-in X.509 certificates Jul 15 04:39:03.132805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2' Jul 15 04:39:03.132823 kernel: Demotion targets for Node 0: null Jul 15 04:39:03.132841 kernel: Key type .fscrypt registered Jul 15 04:39:03.132858 kernel: Key type fscrypt-provisioning registered Jul 15 04:39:03.132876 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 04:39:03.132893 kernel: ima: Allocated hash algorithm: sha1 Jul 15 04:39:03.132911 kernel: ima: No architecture policies found Jul 15 04:39:03.132933 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 15 04:39:03.132951 kernel: clk: Disabling unused clocks Jul 15 04:39:03.132968 kernel: PM: genpd: Disabling unused power domains Jul 15 04:39:03.132986 kernel: Warning: unable to open an initial console. Jul 15 04:39:03.133004 kernel: Freeing unused kernel memory: 39424K Jul 15 04:39:03.133021 kernel: Run /init as init process Jul 15 04:39:03.133039 kernel: with arguments: Jul 15 04:39:03.133057 kernel: /init Jul 15 04:39:03.133074 kernel: with environment: Jul 15 04:39:03.133091 kernel: HOME=/ Jul 15 04:39:03.133113 kernel: TERM=linux Jul 15 04:39:03.133130 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 04:39:03.133149 systemd[1]: Successfully made /usr/ read-only. Jul 15 04:39:03.133173 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:39:03.133193 systemd[1]: Detected virtualization amazon. Jul 15 04:39:03.133212 systemd[1]: Detected architecture arm64. Jul 15 04:39:03.133231 systemd[1]: Running in initrd. Jul 15 04:39:03.133253 systemd[1]: No hostname configured, using default hostname. Jul 15 04:39:03.133273 systemd[1]: Hostname set to . Jul 15 04:39:03.133312 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:39:03.133334 systemd[1]: Queued start job for default target initrd.target. Jul 15 04:39:03.133353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:39:03.133373 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:39:03.133393 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 04:39:03.133413 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:39:03.133438 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 04:39:03.133460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jul 15 04:39:03.133481 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 04:39:03.133501 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 04:39:03.133520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:39:03.133540 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:39:03.133559 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:39:03.133582 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:39:03.133601 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:39:03.133620 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:39:03.133640 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:39:03.133659 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:39:03.133679 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 04:39:03.133698 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 04:39:03.133717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:39:03.133740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:39:03.133760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:39:03.133779 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:39:03.133798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 04:39:03.133818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:39:03.133837 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 04:39:03.133857 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 04:39:03.133877 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 04:39:03.133896 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:39:03.133920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:39:03.133940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:39:03.133960 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 04:39:03.133980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:39:03.134004 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 04:39:03.134024 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:39:03.134044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:03.134064 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 04:39:03.134123 systemd-journald[257]: Collecting audit messages is disabled. Jul 15 04:39:03.134182 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 04:39:03.134205 systemd-journald[257]: Journal started Jul 15 04:39:03.134242 systemd-journald[257]: Runtime Journal (/run/log/journal/ec278dc9b81fa87f9c3898a80dfaa184) is 8M, max 75.3M, 67.3M free. 
Jul 15 04:39:03.093058 systemd-modules-load[258]: Inserted module 'overlay' Jul 15 04:39:03.138313 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:39:03.142140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:39:03.148320 kernel: Bridge firewalling registered Jul 15 04:39:03.148393 systemd-modules-load[258]: Inserted module 'br_netfilter' Jul 15 04:39:03.153833 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:39:03.164647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:39:03.182181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:39:03.191699 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 04:39:03.197974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:39:03.210115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:39:03.226228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:39:03.234960 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 04:39:03.238969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:39:03.255675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:39:03.267480 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:39:03.297282 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:39:03.355901 systemd-resolved[298]: Positive Trust Anchors: Jul 15 04:39:03.355938 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:39:03.356017 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:39:03.473321 kernel: SCSI subsystem initialized Jul 15 04:39:03.481327 kernel: Loading iSCSI transport class v2.0-870. Jul 15 04:39:03.493330 kernel: iscsi: registered transport (tcp) Jul 15 04:39:03.515407 kernel: iscsi: registered transport (qla4xxx) Jul 15 04:39:03.515479 kernel: QLogic iSCSI HBA Driver Jul 15 04:39:03.549462 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:39:03.575870 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 15 04:39:03.588070 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:39:03.617354 kernel: random: crng init done Jul 15 04:39:03.617589 systemd-resolved[298]: Defaulting to hostname 'linux'. Jul 15 04:39:03.621145 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:39:03.623675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:39:03.680324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 04:39:03.686722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 04:39:03.770338 kernel: raid6: neonx8 gen() 6547 MB/s Jul 15 04:39:03.786321 kernel: raid6: neonx4 gen() 6566 MB/s Jul 15 04:39:03.803320 kernel: raid6: neonx2 gen() 5453 MB/s Jul 15 04:39:03.820321 kernel: raid6: neonx1 gen() 3956 MB/s Jul 15 04:39:03.837320 kernel: raid6: int64x8 gen() 3667 MB/s Jul 15 04:39:03.854320 kernel: raid6: int64x4 gen() 3710 MB/s Jul 15 04:39:03.871320 kernel: raid6: int64x2 gen() 3610 MB/s Jul 15 04:39:03.889598 kernel: raid6: int64x1 gen() 2761 MB/s Jul 15 04:39:03.889629 kernel: raid6: using algorithm neonx4 gen() 6566 MB/s Jul 15 04:39:03.908400 kernel: raid6: .... xor() 4642 MB/s, rmw enabled Jul 15 04:39:03.908445 kernel: raid6: using neon recovery algorithm Jul 15 04:39:03.917140 kernel: xor: measuring software checksum speed Jul 15 04:39:03.917195 kernel: 8regs : 12288 MB/sec Jul 15 04:39:03.918320 kernel: 32regs : 12030 MB/sec Jul 15 04:39:03.920632 kernel: arm64_neon : 8744 MB/sec Jul 15 04:39:03.920664 kernel: xor: using function: 8regs (12288 MB/sec) Jul 15 04:39:04.012549 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 04:39:04.022968 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:39:04.029842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:39:04.078711 systemd-udevd[507]: Using default interface naming scheme 'v255'. Jul 15 04:39:04.088883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:39:04.104488 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 04:39:04.142026 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation Jul 15 04:39:04.187050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:39:04.197526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:39:04.340350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:39:04.353668 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 04:39:04.514696 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 15 04:39:04.514768 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 15 04:39:04.514794 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 15 04:39:04.515067 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 15 04:39:04.524338 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 15 04:39:04.524661 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 15 04:39:04.529338 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 15 04:39:04.540337 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:70:fe:c6:23:6b Jul 15 04:39:04.540665 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jul 15 04:39:04.544315 kernel: GPT:9289727 != 16777215 Jul 15 04:39:04.544373 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 04:39:04.544398 kernel: GPT:9289727 != 16777215 Jul 15 04:39:04.544422 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 04:39:04.545738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:39:04.549554 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 04:39:04.546376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:04.551672 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line. Jul 15 04:39:04.562647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:39:04.572855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:39:04.581339 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:39:04.611711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:04.620401 kernel: nvme nvme0: using unchecked data buffer Jul 15 04:39:04.723454 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 15 04:39:04.814328 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 15 04:39:04.836159 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 04:39:04.861951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 04:39:04.883196 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 15 04:39:04.886215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 15 04:39:04.893324 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:39:04.896767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:39:04.906025 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:39:04.914900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 04:39:04.921151 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 04:39:04.950524 disk-uuid[689]: Primary Header is updated. Jul 15 04:39:04.950524 disk-uuid[689]: Secondary Entries is updated. Jul 15 04:39:04.950524 disk-uuid[689]: Secondary Header is updated. Jul 15 04:39:04.964521 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 04:39:04.972799 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:39:05.982395 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 15 04:39:05.984333 disk-uuid[690]: The operation has completed successfully. Jul 15 04:39:06.156348 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 04:39:06.158511 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 04:39:06.254833 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 04:39:06.289660 sh[957]: Success Jul 15 04:39:06.321153 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 15 04:39:06.321227 kernel: device-mapper: uevent: version 1.0.3 Jul 15 04:39:06.322228 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 04:39:06.335323 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 04:39:06.446232 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 04:39:06.453472 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 04:39:06.469868 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 04:39:06.493323 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 04:39:06.496465 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (980) Jul 15 04:39:06.501001 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692 Jul 15 04:39:06.501046 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:39:06.502241 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 04:39:06.578151 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 04:39:06.582681 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:39:06.586338 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 04:39:06.588161 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 04:39:06.599520 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 04:39:06.654920 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Jul 15 04:39:06.660083 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:39:06.660147 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:39:06.661833 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 04:39:06.686331 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:39:06.689798 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 04:39:06.697136 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 04:39:06.784621 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:39:06.793833 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:39:06.876151 systemd-networkd[1149]: lo: Link UP Jul 15 04:39:06.876173 systemd-networkd[1149]: lo: Gained carrier Jul 15 04:39:06.881989 systemd-networkd[1149]: Enumeration completed Jul 15 04:39:06.883752 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:39:06.885123 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:06.885130 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:39:06.896038 systemd[1]: Reached target network.target - Network. 
Jul 15 04:39:06.907032 systemd-networkd[1149]: eth0: Link UP Jul 15 04:39:06.907052 systemd-networkd[1149]: eth0: Gained carrier Jul 15 04:39:06.907073 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:06.923359 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.22.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 04:39:07.159561 ignition[1080]: Ignition 2.21.0 Jul 15 04:39:07.159593 ignition[1080]: Stage: fetch-offline Jul 15 04:39:07.162997 ignition[1080]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:07.163034 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:07.167769 ignition[1080]: Ignition finished successfully Jul 15 04:39:07.171979 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:39:07.176280 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 15 04:39:07.232260 ignition[1162]: Ignition 2.21.0 Jul 15 04:39:07.232321 ignition[1162]: Stage: fetch Jul 15 04:39:07.234003 ignition[1162]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:07.234034 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:07.235110 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:07.254619 ignition[1162]: PUT result: OK Jul 15 04:39:07.258312 ignition[1162]: parsed url from cmdline: "" Jul 15 04:39:07.258333 ignition[1162]: no config URL provided Jul 15 04:39:07.258349 ignition[1162]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 04:39:07.258374 ignition[1162]: no config at "/usr/lib/ignition/user.ign" Jul 15 04:39:07.258408 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:07.262972 ignition[1162]: PUT result: OK Jul 15 04:39:07.263073 ignition[1162]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 15 04:39:07.268144 ignition[1162]: GET result: OK Jul 15 04:39:07.268626 ignition[1162]: parsing config with SHA512: 1eaea6c4d9f36de72c032d283e786ad0a54936b3a4e00222a963e3eecb1eeca5d24bb3dce127b4f5c6105f133505e8761933144281e704a213113ef7e104d640 Jul 15 04:39:07.284557 unknown[1162]: fetched base config from "system" Jul 15 04:39:07.285063 unknown[1162]: fetched base config from "system" Jul 15 04:39:07.285804 ignition[1162]: fetch: fetch complete Jul 15 04:39:07.285077 unknown[1162]: fetched user config from "aws" Jul 15 04:39:07.285817 ignition[1162]: fetch: fetch passed Jul 15 04:39:07.285906 ignition[1162]: Ignition finished successfully Jul 15 04:39:07.298927 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 04:39:07.304106 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 04:39:07.348061 ignition[1168]: Ignition 2.21.0 Jul 15 04:39:07.348614 ignition[1168]: Stage: kargs Jul 15 04:39:07.349185 ignition[1168]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:07.349209 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:07.349401 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:07.358591 ignition[1168]: PUT result: OK Jul 15 04:39:07.363149 ignition[1168]: kargs: kargs passed Jul 15 04:39:07.363250 ignition[1168]: Ignition finished successfully Jul 15 04:39:07.368626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 04:39:07.375971 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 15 04:39:07.431512 ignition[1174]: Ignition 2.21.0 Jul 15 04:39:07.432043 ignition[1174]: Stage: disks Jul 15 04:39:07.432645 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:07.432668 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:07.432835 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:07.436852 ignition[1174]: PUT result: OK Jul 15 04:39:07.446637 ignition[1174]: disks: disks passed Jul 15 04:39:07.446732 ignition[1174]: Ignition finished successfully Jul 15 04:39:07.449245 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 04:39:07.455423 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 04:39:07.460422 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 04:39:07.465503 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:39:07.467777 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:39:07.471843 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:39:07.479310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 04:39:07.539546 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 04:39:07.545071 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 04:39:07.553870 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 04:39:07.684334 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none. Jul 15 04:39:07.685542 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 04:39:07.689748 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 04:39:07.696821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:39:07.700863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 04:39:07.710949 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 04:39:07.713241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 04:39:07.715605 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:39:07.736726 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 04:39:07.743242 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 04:39:07.765338 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Jul 15 04:39:07.770571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:39:07.770634 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:39:07.772154 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 04:39:07.780558 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 04:39:08.066150 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 04:39:08.076252 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Jul 15 04:39:08.085407 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 04:39:08.093475 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 04:39:08.311644 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 04:39:08.318852 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 04:39:08.322361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 04:39:08.357990 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:39:08.358408 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 04:39:08.399631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 04:39:08.405885 ignition[1313]: INFO : Ignition 2.21.0 Jul 15 04:39:08.405885 ignition[1313]: INFO : Stage: mount Jul 15 04:39:08.405885 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:08.405885 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:08.405885 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:08.418762 ignition[1313]: INFO : PUT result: OK Jul 15 04:39:08.422900 ignition[1313]: INFO : mount: mount passed Jul 15 04:39:08.426847 ignition[1313]: INFO : Ignition finished successfully Jul 15 04:39:08.425636 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 04:39:08.435168 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 04:39:08.537482 systemd-networkd[1149]: eth0: Gained IPv6LL Jul 15 04:39:08.689644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:39:08.738353 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325) Jul 15 04:39:08.742701 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:39:08.742743 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:39:08.742769 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 15 04:39:08.754548 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 04:39:08.807335 ignition[1343]: INFO : Ignition 2.21.0 Jul 15 04:39:08.807335 ignition[1343]: INFO : Stage: files Jul 15 04:39:08.811757 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:08.811757 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:08.811757 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:08.819317 ignition[1343]: INFO : PUT result: OK Jul 15 04:39:08.823739 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Jul 15 04:39:08.828724 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 04:39:08.828724 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 04:39:08.838239 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 04:39:08.841308 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 04:39:08.841308 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 04:39:08.839383 unknown[1343]: wrote ssh authorized keys file for user: core Jul 15 04:39:08.850163 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 15 04:39:08.850163 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 15 04:39:08.938958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 04:39:09.091983 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:39:09.098048 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:39:09.128754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:39:09.128754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:39:09.128754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:39:09.128754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:39:09.128754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:39:09.128754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 15 04:39:09.696714 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 04:39:10.076828 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:39:10.082065 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 04:39:10.082065 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:39:10.089231 ignition[1343]: INFO : files: files passed Jul 15 04:39:10.089231 ignition[1343]: INFO : Ignition finished successfully Jul 15 04:39:10.109339 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 04:39:10.114604 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 04:39:10.131079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 04:39:10.143451 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 04:39:10.145607 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 04:39:10.165186 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:39:10.165186 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:39:10.174216 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:39:10.180827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:39:10.186927 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 04:39:10.194503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jul 15 04:39:10.289132 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 04:39:10.290233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 04:39:10.297454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 04:39:10.302419 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 04:39:10.305143 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 04:39:10.311548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 04:39:10.364349 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:39:10.370517 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 04:39:10.413484 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 04:39:10.413746 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 04:39:10.421574 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:39:10.424502 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:39:10.432260 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 04:39:10.436246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 04:39:10.436395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:39:10.443643 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 04:39:10.445913 systemd[1]: Stopped target basic.target - Basic System. Jul 15 04:39:10.451792 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 04:39:10.454450 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:39:10.457510 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 04:39:10.464553 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:39:10.467431 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 04:39:10.473967 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:39:10.477238 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 04:39:10.483879 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 04:39:10.486909 systemd[1]: Stopped target swap.target - Swaps. Jul 15 04:39:10.492599 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 04:39:10.492703 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:39:10.499577 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:39:10.506374 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:39:10.508932 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 04:39:10.509427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:39:10.516519 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 04:39:10.516613 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 04:39:10.522337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 04:39:10.522423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jul 15 04:39:10.529611 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 04:39:10.529691 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 04:39:10.538312 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 04:39:10.549435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 04:39:10.551518 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 04:39:10.551620 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:39:10.554505 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 04:39:10.554593 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:39:10.600517 ignition[1396]: INFO : Ignition 2.21.0 Jul 15 04:39:10.600517 ignition[1396]: INFO : Stage: umount Jul 15 04:39:10.604576 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:39:10.604576 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 15 04:39:10.604576 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 15 04:39:10.613027 ignition[1396]: INFO : PUT result: OK Jul 15 04:39:10.617447 ignition[1396]: INFO : umount: umount passed Jul 15 04:39:10.624015 ignition[1396]: INFO : Ignition finished successfully Jul 15 04:39:10.617478 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 04:39:10.632238 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 04:39:10.634403 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 04:39:10.639172 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 04:39:10.639378 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 04:39:10.645711 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 04:39:10.645815 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 04:39:10.648532 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 04:39:10.648610 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 04:39:10.655757 systemd[1]: Stopped target network.target - Network. Jul 15 04:39:10.658009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 04:39:10.658103 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:39:10.662263 systemd[1]: Stopped target paths.target - Path Units. Jul 15 04:39:10.666760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 04:39:10.666901 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:39:10.670914 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 04:39:10.675987 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 04:39:10.679662 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 04:39:10.679738 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:39:10.683908 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 04:39:10.683993 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:39:10.689378 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 04:39:10.689474 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 04:39:10.693363 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 04:39:10.693439 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 15 04:39:10.697441 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 04:39:10.701444 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 04:39:10.708661 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 04:39:10.708832 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 04:39:10.716242 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 04:39:10.716428 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 04:39:10.745952 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 04:39:10.746166 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 04:39:10.754383 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 04:39:10.754775 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 04:39:10.754963 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 04:39:10.769987 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 04:39:10.772940 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 04:39:10.775787 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 04:39:10.775860 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:39:10.777704 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 04:39:10.778239 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 04:39:10.778349 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:39:10.778699 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:39:10.778769 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:39:10.794181 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 04:39:10.794316 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 04:39:10.807167 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 04:39:10.807265 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:39:10.816600 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:39:10.831959 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:39:10.832187 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:39:10.860117 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:39:10.861821 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:39:10.870553 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:39:10.871601 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:39:10.878227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 04:39:10.878329 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:39:10.880853 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:39:10.880942 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:39:10.886023 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:39:10.886109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jul 15 04:39:10.896102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:39:10.896200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:39:10.904924 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:39:10.911227 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 04:39:10.911383 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:39:10.919476 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:39:10.919674 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:39:10.922692 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 04:39:10.922788 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:39:10.937081 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 04:39:10.937320 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:39:10.945363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:39:10.945455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:10.952265 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 04:39:10.952656 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 04:39:10.952739 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 04:39:10.952823 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:39:10.955818 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 04:39:10.956042 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:39:10.973049 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:39:10.974011 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:39:10.983839 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:39:10.998639 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:39:11.025087 systemd[1]: Switching root. Jul 15 04:39:11.063215 systemd-journald[257]: Journal stopped Jul 15 04:39:13.158771 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). 
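At this point the initrd journal is closed and PID 1 switches into the real root; the same entries remain readable from the persistent journal once the system is up. A minimal sketch for pulling the Ignition messages of this boot back out, assuming the python-systemd bindings (the `systemd` Python module) are installed:

# Sketch: read back the Ignition entries seen above from the journal of the
# current boot. Assumes the python-systemd bindings are available.
from systemd import journal

reader = journal.Reader()
reader.this_boot()                              # restrict to the boot shown in this log
reader.add_match(SYSLOG_IDENTIFIER="ignition")  # the ignition[...] lines above
for entry in reader:
    # e.g. "INFO : files: files passed" / "Ignition finished successfully"
    print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])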
Jul 15 04:39:13.158878 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:39:13.158925 kernel: SELinux: policy capability open_perms=1 Jul 15 04:39:13.158955 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:39:13.158984 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:39:13.159021 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:39:13.159051 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:39:13.159085 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:39:13.159114 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:39:13.159141 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:39:13.159171 kernel: audit: type=1403 audit(1752554351.448:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:39:13.159207 systemd[1]: Successfully loaded SELinux policy in 88.626ms. Jul 15 04:39:13.159255 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.872ms. Jul 15 04:39:13.159303 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:39:13.159367 systemd[1]: Detected virtualization amazon. Jul 15 04:39:13.159401 systemd[1]: Detected architecture arm64. Jul 15 04:39:13.159434 systemd[1]: Detected first boot. Jul 15 04:39:13.159466 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:39:13.159497 zram_generator::config[1443]: No configuration found. Jul 15 04:39:13.159529 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:39:13.159559 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:39:13.159591 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 04:39:13.159619 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:39:13.159653 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:39:13.159680 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:39:13.159714 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:39:13.159744 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:39:13.159775 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:39:13.159804 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:39:13.159834 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:39:13.159862 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 04:39:13.159892 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:39:13.159945 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:39:13.159982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:39:13.160013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:39:13.160041 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 15 04:39:13.160069 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:39:13.160097 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:39:13.160129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:39:13.160157 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 04:39:13.160190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:39:13.160222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:39:13.160250 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:39:13.160317 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:39:13.160357 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:39:13.160385 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:39:13.160416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:39:13.160447 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:39:13.160476 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:39:13.160507 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:39:13.160540 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 04:39:13.160570 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:39:13.160600 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:39:13.160628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:39:13.160657 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:39:13.160685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:39:13.160714 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:39:13.160745 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:39:13.160772 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:39:13.160805 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 04:39:13.160833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:39:13.160861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:39:13.160891 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:39:13.160922 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:39:13.160951 systemd[1]: Reached target machines.target - Containers. Jul 15 04:39:13.160982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:39:13.161010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:39:13.161041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:39:13.161069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:39:13.161099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 15 04:39:13.161126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:39:13.161154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:39:13.161181 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:39:13.161211 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:39:13.161239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:39:13.161267 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:39:13.161346 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:39:13.161382 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 04:39:13.161410 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:39:13.166408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:13.166448 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:39:13.166477 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:39:13.166505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:39:13.166535 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:39:13.166565 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:39:13.166601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:39:13.166635 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:39:13.166662 systemd[1]: Stopped verity-setup.service. Jul 15 04:39:13.166690 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:39:13.166720 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:39:13.166748 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:39:13.166776 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 04:39:13.166804 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:39:13.166831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:39:13.166864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:39:13.166897 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 04:39:13.166925 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:39:13.166953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:39:13.166980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:39:13.167008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:39:13.167036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:39:13.167063 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 04:39:13.167091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:39:13.167121 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 15 04:39:13.167162 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 04:39:13.167190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:39:13.167221 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:39:13.167251 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:39:13.167282 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:39:13.167381 kernel: ACPI: bus type drm_connector registered Jul 15 04:39:13.167413 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:39:13.167443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:13.167476 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:39:13.167504 kernel: loop: module loaded Jul 15 04:39:13.167583 systemd-journald[1526]: Collecting audit messages is disabled. Jul 15 04:39:13.167636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:39:13.167665 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:39:13.167694 systemd-journald[1526]: Journal started Jul 15 04:39:13.167743 systemd-journald[1526]: Runtime Journal (/run/log/journal/ec278dc9b81fa87f9c3898a80dfaa184) is 8M, max 75.3M, 67.3M free. Jul 15 04:39:12.468887 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:39:12.480092 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 15 04:39:12.480932 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 04:39:13.173356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:39:13.188512 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 04:39:13.188616 kernel: fuse: init (API version 7.41) Jul 15 04:39:13.198434 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:39:13.202619 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:39:13.203146 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:39:13.206941 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:39:13.207322 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:39:13.213370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:39:13.229544 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:39:13.236735 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 04:39:13.241229 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:39:13.242831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:39:13.295739 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:39:13.302390 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:39:13.307986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 15 04:39:13.313085 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 04:39:13.319673 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:39:13.335817 kernel: loop0: detected capacity change from 0 to 61256 Jul 15 04:39:13.329961 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:39:13.335571 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. Jul 15 04:39:13.335596 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. Jul 15 04:39:13.356810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:39:13.369052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:39:13.376384 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:39:13.398727 systemd-journald[1526]: Time spent on flushing to /var/log/journal/ec278dc9b81fa87f9c3898a80dfaa184 is 157.070ms for 937 entries. Jul 15 04:39:13.398727 systemd-journald[1526]: System Journal (/var/log/journal/ec278dc9b81fa87f9c3898a80dfaa184) is 8M, max 195.6M, 187.6M free. Jul 15 04:39:13.573719 systemd-journald[1526]: Received client request to flush runtime journal. Jul 15 04:39:13.575393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:39:13.575447 kernel: loop1: detected capacity change from 0 to 105936 Jul 15 04:39:13.490481 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:39:13.495772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:39:13.500798 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:39:13.508491 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:39:13.534174 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:39:13.552594 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 04:39:13.563049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:39:13.581136 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:39:13.611359 kernel: loop2: detected capacity change from 0 to 203944 Jul 15 04:39:13.645920 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jul 15 04:39:13.645964 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jul 15 04:39:13.659380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:39:13.736764 kernel: loop3: detected capacity change from 0 to 134232 Jul 15 04:39:13.806331 kernel: loop4: detected capacity change from 0 to 61256 Jul 15 04:39:13.834593 kernel: loop5: detected capacity change from 0 to 105936 Jul 15 04:39:13.867326 kernel: loop6: detected capacity change from 0 to 203944 Jul 15 04:39:13.911351 kernel: loop7: detected capacity change from 0 to 134232 Jul 15 04:39:13.954042 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 15 04:39:13.956489 (sd-merge)[1604]: Merged extensions into '/usr'. Jul 15 04:39:13.968535 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:39:13.968562 systemd[1]: Reloading... Jul 15 04:39:14.113333 zram_generator::config[1635]: No configuration found. 
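The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami images onto /usr, followed by a manager reload. As a quick way to re-check that state on the booted host, the sketch below (illustrative only) lists the extension links that Ignition created earlier and calls the stock systemd-sysext tool:

# Sketch: inspect the merged system extensions after the (sd-merge) step above.
import subprocess
from pathlib import Path

# Extension images linked under /etc/extensions (e.g. kubernetes.raw):
for link in Path("/etc/extensions").glob("*.raw"):
    print(link, "->", link.resolve())

# Ask systemd-sysext which hierarchies are currently merged:
subprocess.run(["systemd-sysext", "status"], check=True)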
Jul 15 04:39:14.215178 ldconfig[1551]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 04:39:14.398106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:14.594485 systemd[1]: Reloading finished in 625 ms. Jul 15 04:39:14.617963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:39:14.621063 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:39:14.624515 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 04:39:14.644546 systemd[1]: Starting ensure-sysext.service... Jul 15 04:39:14.648332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:39:14.662060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:39:14.689658 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:39:14.689690 systemd[1]: Reloading... Jul 15 04:39:14.693089 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 04:39:14.694196 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:39:14.694985 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:39:14.695650 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:39:14.697710 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:39:14.698708 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Jul 15 04:39:14.698844 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Jul 15 04:39:14.707364 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:39:14.707520 systemd-tmpfiles[1684]: Skipping /boot Jul 15 04:39:14.729875 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:39:14.730035 systemd-tmpfiles[1684]: Skipping /boot Jul 15 04:39:14.797588 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Jul 15 04:39:14.857405 zram_generator::config[1711]: No configuration found. Jul 15 04:39:15.119616 (udev-worker)[1733]: Network interface NamePolicy= disabled on kernel command line. Jul 15 04:39:15.228964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:15.481196 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 04:39:15.482843 systemd[1]: Reloading finished in 792 ms. Jul 15 04:39:15.538918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:39:15.544370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:39:15.639543 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:39:15.648602 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jul 15 04:39:15.655741 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:39:15.665215 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:39:15.673556 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:39:15.681721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:39:15.703102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:39:15.705602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:39:15.714809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:39:15.720014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:39:15.723478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:15.723894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:15.754651 systemd[1]: Finished ensure-sysext.service. Jul 15 04:39:15.757761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:39:15.760923 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:39:15.763641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:39:15.763709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:39:15.763817 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:39:15.771132 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:39:15.777644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:39:15.808005 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:39:15.863170 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 04:39:15.873485 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:39:15.878928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:39:15.882387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:39:15.907918 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:39:15.909359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:39:15.925963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:39:15.926385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:39:15.929488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:39:15.933636 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:39:15.934411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 15 04:39:15.937721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:39:15.948959 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:39:15.974052 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 04:39:15.981495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:39:16.035105 augenrules[1939]: No rules Jul 15 04:39:16.039112 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:39:16.040772 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:39:16.164029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:39:16.183065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 04:39:16.187968 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:39:16.241381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:39:16.257988 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:39:16.372456 systemd-networkd[1869]: lo: Link UP Jul 15 04:39:16.372471 systemd-networkd[1869]: lo: Gained carrier Jul 15 04:39:16.375905 systemd-networkd[1869]: Enumeration completed Jul 15 04:39:16.376253 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:39:16.377316 systemd-networkd[1869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:16.377523 systemd-networkd[1869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:39:16.382168 systemd-networkd[1869]: eth0: Link UP Jul 15 04:39:16.382624 systemd-networkd[1869]: eth0: Gained carrier Jul 15 04:39:16.382787 systemd-networkd[1869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:39:16.384129 systemd-resolved[1870]: Positive Trust Anchors: Jul 15 04:39:16.384164 systemd-resolved[1870]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:39:16.384228 systemd-resolved[1870]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:39:16.385832 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:39:16.396780 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:39:16.401419 systemd-networkd[1869]: eth0: DHCPv4 address 172.31.22.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 04:39:16.409281 systemd-resolved[1870]: Defaulting to hostname 'linux'. 
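The networkd and resolved messages above record eth0 coming up with a DHCPv4 lease (172.31.22.130/20 via 172.31.16.1) and the resolver falling back to the hostname 'linux'. For reference, the same state can be re-queried on the running host with the stock systemd CLIs; the wrapper below is only a convenience sketch:

# Sketch: re-check the link and resolver state logged above.
import subprocess

subprocess.run(["networkctl", "status", "eth0"], check=True)  # shows the DHCPv4 address and gateway
subprocess.run(["resolvectl", "status"], check=True)          # shows the DNS configuration in use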
Jul 15 04:39:16.412412 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:39:16.415029 systemd[1]: Reached target network.target - Network. Jul 15 04:39:16.417175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:39:16.419868 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:39:16.423622 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:39:16.432210 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:39:16.435259 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:39:16.437959 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:39:16.440759 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:39:16.443537 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:39:16.443595 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:39:16.445985 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:39:16.449664 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:39:16.454789 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:39:16.461687 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:39:16.464711 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:39:16.467352 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 04:39:16.479456 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 04:39:16.482916 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:39:16.487137 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 04:39:16.490161 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:39:16.493513 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:39:16.495800 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:39:16.498604 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:39:16.498792 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:39:16.502448 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:39:16.509644 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 04:39:16.518726 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:39:16.524238 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:39:16.532590 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 04:39:16.545598 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:39:16.548595 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:39:16.552042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 15 04:39:16.558337 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 04:39:16.566760 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:39:16.582385 jq[1970]: false Jul 15 04:39:16.583339 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 04:39:16.595858 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:39:16.606430 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:39:16.623732 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:39:16.628025 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 04:39:16.629646 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:39:16.640621 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 04:39:16.653094 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:39:16.679368 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:39:16.686016 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 04:39:16.686492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:39:16.733464 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:39:16.735898 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:39:16.748910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:39:16.750574 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:39:16.765029 jq[1983]: true Jul 15 04:39:16.785450 extend-filesystems[1971]: Found /dev/nvme0n1p6 Jul 15 04:39:16.822634 extend-filesystems[1971]: Found /dev/nvme0n1p9 Jul 15 04:39:16.822377 (ntainerd)[2009]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:39:16.852680 jq[2008]: true Jul 15 04:39:16.853779 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9 Jul 15 04:39:16.873312 update_engine[1981]: I20250715 04:39:16.864907 1981 main.cc:92] Flatcar Update Engine starting Jul 15 04:39:16.891475 tar[2002]: linux-arm64/helm Jul 15 04:39:16.895060 dbus-daemon[1968]: [system] SELinux support is enabled Jul 15 04:39:16.896592 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 04:39:16.905706 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 04:39:16.905926 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:39:16.908846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:39:16.908880 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 15 04:39:16.940483 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9 Jul 15 04:39:16.945802 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:30 UTC 2025 (1): Starting Jul 15 04:39:16.955794 extend-filesystems[2025]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:30 UTC 2025 (1): Starting Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: ---------------------------------------------------- Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: corporation. Support and training for ntp-4 are Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: available at https://www.nwtime.org/support Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: ---------------------------------------------------- Jul 15 04:39:16.961423 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: proto: precision = 0.096 usec (-23) Jul 15 04:39:16.945867 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 04:39:16.976686 coreos-metadata[1967]: Jul 15 04:39:16.976 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 04:39:16.976686 coreos-metadata[1967]: Jul 15 04:39:16.976 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 15 04:39:16.964040 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 04:39:16.977157 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: basedate set to 2025-07-03 Jul 15 04:39:16.977157 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: gps base set to 2025-07-06 (week 2374) Jul 15 04:39:16.945886 ntpd[1973]: ---------------------------------------------------- Jul 15 04:39:16.983357 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 15 04:39:16.971902 systemd[1]: Started update-engine.service - Update Engine. 
Jul 15 04:39:16.983521 coreos-metadata[1967]: Jul 15 04:39:16.979 INFO Fetch successful Jul 15 04:39:16.983521 coreos-metadata[1967]: Jul 15 04:39:16.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 15 04:39:16.983521 coreos-metadata[1967]: Jul 15 04:39:16.981 INFO Fetch successful Jul 15 04:39:16.983521 coreos-metadata[1967]: Jul 15 04:39:16.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 15 04:39:16.983521 coreos-metadata[1967]: Jul 15 04:39:16.981 INFO Fetch successful Jul 15 04:39:16.945902 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Listen normally on 3 eth0 172.31.22.130:123 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: bind(21) AF_INET6 fe80::470:feff:fec6:236b%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: unable to create socket on eth0 (5) for fe80::470:feff:fec6:236b%2#123 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: failed to init interface for address fe80::470:feff:fec6:236b%2 Jul 15 04:39:16.983982 ntpd[1973]: 15 Jul 04:39:16 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 15 04:39:16.977429 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 04:39:16.946012 ntpd[1973]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jul 15 04:39:16.999101 update_engine[1981]: I20250715 04:39:16.985955 1981 update_check_scheduler.cc:74] Next update check in 5m32s Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.987 INFO Fetch successful Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.987 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.989 INFO Fetch failed with 404: resource not found Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.989 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.989 INFO Fetch successful Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.989 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.992 INFO Fetch successful Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.994 INFO Fetch successful Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.994 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.996 INFO Fetch successful Jul 15 04:39:16.999155 coreos-metadata[1967]: Jul 15 04:39:16.996 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 15 04:39:16.946036 ntpd[1973]: corporation. 
Support and training for ntp-4 are Jul 15 04:39:16.946052 ntpd[1973]: available at https://www.nwtime.org/support Jul 15 04:39:16.946069 ntpd[1973]: ---------------------------------------------------- Jul 15 04:39:16.947422 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1869 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 04:39:16.955249 ntpd[1973]: proto: precision = 0.096 usec (-23) Jul 15 04:39:16.956214 ntpd[1973]: basedate set to 2025-07-03 Jul 15 04:39:16.963054 ntpd[1973]: gps base set to 2025-07-06 (week 2374) Jul 15 04:39:16.978148 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 04:39:16.978217 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 04:39:16.978508 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 04:39:16.978567 ntpd[1973]: Listen normally on 3 eth0 172.31.22.130:123 Jul 15 04:39:17.012153 coreos-metadata[1967]: Jul 15 04:39:17.002 INFO Fetch successful Jul 15 04:39:17.012216 ntpd[1973]: 15 Jul 04:39:17 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:39:17.012216 ntpd[1973]: 15 Jul 04:39:17 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:39:16.978638 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jul 15 04:39:16.978703 ntpd[1973]: bind(21) AF_INET6 fe80::470:feff:fec6:236b%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 04:39:16.978739 ntpd[1973]: unable to create socket on eth0 (5) for fe80::470:feff:fec6:236b%2#123 Jul 15 04:39:16.978763 ntpd[1973]: failed to init interface for address fe80::470:feff:fec6:236b%2 Jul 15 04:39:16.978809 ntpd[1973]: Listening on routing socket on fd #21 for interface updates Jul 15 04:39:17.001561 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:39:17.001619 ntpd[1973]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:39:17.046627 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:39:17.075950 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 15 04:39:17.096319 extend-filesystems[2025]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 15 04:39:17.096319 extend-filesystems[2025]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 04:39:17.096319 extend-filesystems[2025]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 15 04:39:17.135955 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9 Jul 15 04:39:17.101800 systemd-logind[1980]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 04:39:17.101835 systemd-logind[1980]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 15 04:39:17.103121 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:39:17.110565 systemd-logind[1980]: New seat seat0. Jul 15 04:39:17.114118 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 04:39:17.124538 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:39:17.172798 bash[2053]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:39:17.183367 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 04:39:17.196430 systemd[1]: Starting sshkeys.service... Jul 15 04:39:17.227015 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
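The coreos-metadata fetches above follow the IMDSv2 flow: PUT a session token against 169.254.169.254, then GET the individual metadata paths (API version 2021-01-03) with that token; the ipv6 path legitimately returns 404 on this instance. A minimal standalone sketch of that flow, using only the endpoints visible in the log:

# Sketch of the IMDSv2 token-then-fetch pattern seen in the coreos-metadata
# and Ignition log entries above.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("instance-id", token))
print(imds_get("local-ipv4", token))   # 172.31.22.130 on the instance in this log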
Jul 15 04:39:17.234701 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:39:17.237712 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 04:39:17.344796 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 04:39:17.352950 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 04:39:17.499494 systemd-networkd[1869]: eth0: Gained IPv6LL Jul 15 04:39:17.512410 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:39:17.516376 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 04:39:17.525670 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 04:39:17.539580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:17.548136 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 04:39:17.756172 coreos-metadata[2081]: Jul 15 04:39:17.755 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 04:39:17.760464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:39:17.776247 coreos-metadata[2081]: Jul 15 04:39:17.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 04:39:17.781370 coreos-metadata[2081]: Jul 15 04:39:17.781 INFO Fetch successful Jul 15 04:39:17.781370 coreos-metadata[2081]: Jul 15 04:39:17.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 04:39:17.785623 coreos-metadata[2081]: Jul 15 04:39:17.785 INFO Fetch successful Jul 15 04:39:17.789946 unknown[2081]: wrote ssh authorized keys file for user: core Jul 15 04:39:17.827686 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 04:39:17.847599 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 04:39:17.853481 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 04:39:17.868983 amazon-ssm-agent[2111]: Initializing new seelog logger Jul 15 04:39:17.868983 amazon-ssm-agent[2111]: New Seelog Logger Creation Complete Jul 15 04:39:17.870703 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.870703 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.873769 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 processing appconfig overrides Jul 15 04:39:17.874640 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.879326 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.879326 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 processing appconfig overrides Jul 15 04:39:17.879326 amazon-ssm-agent[2111]: 2025-07-15 04:39:17.8741 INFO Proxy environment variables: Jul 15 04:39:17.879326 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.879326 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 04:39:17.879326 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 processing appconfig overrides Jul 15 04:39:17.889626 systemd[1]: Starting polkit.service - Authorization Manager... Jul 15 04:39:17.905197 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.913051 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:17.913051 amazon-ssm-agent[2111]: 2025/07/15 04:39:17 processing appconfig overrides Jul 15 04:39:17.939752 update-ssh-keys[2159]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:39:17.942266 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 04:39:17.960359 systemd[1]: Finished sshkeys.service. Jul 15 04:39:17.985654 amazon-ssm-agent[2111]: 2025-07-15 04:39:17.8742 INFO https_proxy: Jul 15 04:39:17.986776 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:39:18.093384 amazon-ssm-agent[2111]: 2025-07-15 04:39:17.8742 INFO http_proxy: Jul 15 04:39:18.194321 amazon-ssm-agent[2111]: 2025-07-15 04:39:17.8742 INFO no_proxy: Jul 15 04:39:18.293777 amazon-ssm-agent[2111]: 2025-07-15 04:39:17.8782 INFO Checking if agent identity type OnPrem can be assumed Jul 15 04:39:18.299128 containerd[2009]: time="2025-07-15T04:39:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:39:18.312933 containerd[2009]: time="2025-07-15T04:39:18.311484502Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:39:18.329326 sshd_keygen[1997]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:39:18.398381 amazon-ssm-agent[2111]: 2025-07-15 04:39:17.8785 INFO Checking if agent identity type EC2 can be assumed Jul 15 04:39:18.412914 containerd[2009]: time="2025-07-15T04:39:18.412836383Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.076µs" Jul 15 04:39:18.412914 containerd[2009]: time="2025-07-15T04:39:18.412899203Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:39:18.413069 containerd[2009]: time="2025-07-15T04:39:18.412939091Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:39:18.413266 containerd[2009]: time="2025-07-15T04:39:18.413222963Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:39:18.416437 containerd[2009]: time="2025-07-15T04:39:18.413267243Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 04:39:18.416539 containerd[2009]: time="2025-07-15T04:39:18.416498747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:39:18.416719 containerd[2009]: time="2025-07-15T04:39:18.416674727Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:39:18.416771 containerd[2009]: time="2025-07-15T04:39:18.416717579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:39:18.417173 containerd[2009]: 
time="2025-07-15T04:39:18.417118823Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:39:18.417230 containerd[2009]: time="2025-07-15T04:39:18.417165047Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:39:18.417230 containerd[2009]: time="2025-07-15T04:39:18.417195107Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:39:18.417230 containerd[2009]: time="2025-07-15T04:39:18.417216755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:39:18.423847 containerd[2009]: time="2025-07-15T04:39:18.423749735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:39:18.424321 containerd[2009]: time="2025-07-15T04:39:18.424255559Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:39:18.425084 containerd[2009]: time="2025-07-15T04:39:18.424777079Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:39:18.425084 containerd[2009]: time="2025-07-15T04:39:18.424821695Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:39:18.428310 containerd[2009]: time="2025-07-15T04:39:18.427357703Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:39:18.429026 containerd[2009]: time="2025-07-15T04:39:18.428972435Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:39:18.431327 containerd[2009]: time="2025-07-15T04:39:18.429161819Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:39:18.438918 containerd[2009]: time="2025-07-15T04:39:18.438845255Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:39:18.439061 containerd[2009]: time="2025-07-15T04:39:18.438948107Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:39:18.439061 containerd[2009]: time="2025-07-15T04:39:18.438991919Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:39:18.439061 containerd[2009]: time="2025-07-15T04:39:18.439021835Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:39:18.439061 containerd[2009]: time="2025-07-15T04:39:18.439051043Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:39:18.439224 containerd[2009]: time="2025-07-15T04:39:18.439083023Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 04:39:18.439224 containerd[2009]: time="2025-07-15T04:39:18.439111655Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:39:18.439224 containerd[2009]: 
time="2025-07-15T04:39:18.439139771Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:39:18.439224 containerd[2009]: time="2025-07-15T04:39:18.439170479Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:39:18.439224 containerd[2009]: time="2025-07-15T04:39:18.439196699Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:39:18.439450 containerd[2009]: time="2025-07-15T04:39:18.439220675Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:39:18.439450 containerd[2009]: time="2025-07-15T04:39:18.439250591Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:39:18.442229 containerd[2009]: time="2025-07-15T04:39:18.442161635Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:39:18.442347 containerd[2009]: time="2025-07-15T04:39:18.442265183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:39:18.442395 containerd[2009]: time="2025-07-15T04:39:18.442351979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 04:39:18.442463 containerd[2009]: time="2025-07-15T04:39:18.442382855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:39:18.442463 containerd[2009]: time="2025-07-15T04:39:18.442440719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 04:39:18.443917 containerd[2009]: time="2025-07-15T04:39:18.442469555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:39:18.444004 containerd[2009]: time="2025-07-15T04:39:18.443947055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:39:18.444004 containerd[2009]: time="2025-07-15T04:39:18.443986271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 04:39:18.444121 containerd[2009]: time="2025-07-15T04:39:18.444041255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:39:18.444121 containerd[2009]: time="2025-07-15T04:39:18.444075971Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:39:18.446756 containerd[2009]: time="2025-07-15T04:39:18.446681543Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:39:18.448313 containerd[2009]: time="2025-07-15T04:39:18.447136055Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:39:18.448313 containerd[2009]: time="2025-07-15T04:39:18.447178943Z" level=info msg="Start snapshots syncer" Jul 15 04:39:18.448313 containerd[2009]: time="2025-07-15T04:39:18.447229859Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.447627743Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.447716339Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.447829055Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.448090883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.448137083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.448164335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.448190987Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.448227899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:39:18.448502 containerd[2009]: time="2025-07-15T04:39:18.448255163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:39:18.454309 containerd[2009]: time="2025-07-15T04:39:18.452949299Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:39:18.456890 containerd[2009]: time="2025-07-15T04:39:18.455025047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:39:18.456890 containerd[2009]: 
time="2025-07-15T04:39:18.455528267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:39:18.456890 containerd[2009]: time="2025-07-15T04:39:18.455570351Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:39:18.456890 containerd[2009]: time="2025-07-15T04:39:18.455757551Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:39:18.456890 containerd[2009]: time="2025-07-15T04:39:18.456658763Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:39:18.456890 containerd[2009]: time="2025-07-15T04:39:18.456686891Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:39:18.457364 containerd[2009]: time="2025-07-15T04:39:18.456713759Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:39:18.457424 containerd[2009]: time="2025-07-15T04:39:18.457376255Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:39:18.457486 containerd[2009]: time="2025-07-15T04:39:18.457414475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:39:18.457486 containerd[2009]: time="2025-07-15T04:39:18.457466327Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:39:18.457822 containerd[2009]: time="2025-07-15T04:39:18.457775219Z" level=info msg="runtime interface created" Jul 15 04:39:18.457822 containerd[2009]: time="2025-07-15T04:39:18.457809371Z" level=info msg="created NRI interface" Jul 15 04:39:18.457943 containerd[2009]: time="2025-07-15T04:39:18.457860875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:39:18.457943 containerd[2009]: time="2025-07-15T04:39:18.457895699Z" level=info msg="Connect containerd service" Jul 15 04:39:18.461333 containerd[2009]: time="2025-07-15T04:39:18.458334731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:39:18.466904 containerd[2009]: time="2025-07-15T04:39:18.466433939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:39:18.480664 polkitd[2166]: Started polkitd version 126 Jul 15 04:39:18.496088 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2192 INFO Agent will take identity from EC2 Jul 15 04:39:18.502537 polkitd[2166]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 04:39:18.504959 polkitd[2166]: Loading rules from directory /run/polkit-1/rules.d Jul 15 04:39:18.505050 polkitd[2166]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 04:39:18.505734 polkitd[2166]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 04:39:18.505805 polkitd[2166]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 04:39:18.505886 
polkitd[2166]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 04:39:18.509143 polkitd[2166]: Finished loading, compiling and executing 2 rules Jul 15 04:39:18.509569 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 04:39:18.517152 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 04:39:18.521936 polkitd[2166]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 04:39:18.554303 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:39:18.562920 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:39:18.568177 systemd[1]: Started sshd@0-172.31.22.130:22-139.178.89.65:51074.service - OpenSSH per-connection server daemon (139.178.89.65:51074). Jul 15 04:39:18.596436 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2310 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 04:39:18.612862 systemd-hostnamed[2026]: Hostname set to (transient) Jul 15 04:39:18.617186 systemd-resolved[1870]: System hostname changed to 'ip-172-31-22-130'. Jul 15 04:39:18.639006 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:39:18.639716 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:39:18.648776 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:39:18.700411 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2310 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 15 04:39:18.736359 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 04:39:18.746617 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:39:18.754481 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 04:39:18.760514 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 04:39:18.798669 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2310 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 04:39:18.881224 containerd[2009]: time="2025-07-15T04:39:18.881027605Z" level=info msg="Start subscribing containerd event" Jul 15 04:39:18.881224 containerd[2009]: time="2025-07-15T04:39:18.881200501Z" level=info msg="Start recovering state" Jul 15 04:39:18.881632 containerd[2009]: time="2025-07-15T04:39:18.881430373Z" level=info msg="Start event monitor" Jul 15 04:39:18.881632 containerd[2009]: time="2025-07-15T04:39:18.881582305Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:39:18.881733 containerd[2009]: time="2025-07-15T04:39:18.881602621Z" level=info msg="Start streaming server" Jul 15 04:39:18.881778 containerd[2009]: time="2025-07-15T04:39:18.881737729Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:39:18.881778 containerd[2009]: time="2025-07-15T04:39:18.881757325Z" level=info msg="runtime interface starting up..." Jul 15 04:39:18.881778 containerd[2009]: time="2025-07-15T04:39:18.881772145Z" level=info msg="starting plugins..." Jul 15 04:39:18.887714 containerd[2009]: time="2025-07-15T04:39:18.886723009Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:39:18.892790 containerd[2009]: time="2025-07-15T04:39:18.889616953Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 04:39:18.892906 containerd[2009]: time="2025-07-15T04:39:18.892844809Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:39:18.898609 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2310 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 15 04:39:18.907062 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 04:39:18.913480 containerd[2009]: time="2025-07-15T04:39:18.912348745Z" level=info msg="containerd successfully booted in 0.613895s" Jul 15 04:39:18.952544 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 51074 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:18.958845 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:18.983902 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:39:18.990661 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 04:39:18.999445 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2310 INFO [Registrar] Starting registrar module Jul 15 04:39:19.026255 systemd-logind[1980]: New session 1 of user core. Jul 15 04:39:19.041178 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:39:19.056381 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:39:19.078034 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:39:19.086014 systemd-logind[1980]: New session c1 of user core. Jul 15 04:39:19.101318 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2424 INFO [EC2Identity] Checking disk for registration info Jul 15 04:39:19.169045 tar[2002]: linux-arm64/LICENSE Jul 15 04:39:19.169574 tar[2002]: linux-arm64/README.md Jul 15 04:39:19.203519 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2425 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 04:39:19.206052 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 04:39:19.304313 amazon-ssm-agent[2111]: 2025-07-15 04:39:18.2425 INFO [EC2Identity] Generating registration keypair Jul 15 04:39:19.429534 systemd[2232]: Queued start job for default target default.target. Jul 15 04:39:19.437808 systemd[2232]: Created slice app.slice - User Application Slice. Jul 15 04:39:19.437876 systemd[2232]: Reached target paths.target - Paths. Jul 15 04:39:19.437960 systemd[2232]: Reached target timers.target - Timers. Jul 15 04:39:19.443166 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:39:19.448362 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.4479 INFO [EC2Identity] Checking write access before registering Jul 15 04:39:19.484919 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:39:19.485238 systemd[2232]: Reached target sockets.target - Sockets. Jul 15 04:39:19.485392 systemd[2232]: Reached target basic.target - Basic System. Jul 15 04:39:19.485478 systemd[2232]: Reached target default.target - Main User Target. Jul 15 04:39:19.485538 systemd[2232]: Startup finished in 373ms. Jul 15 04:39:19.485772 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:39:19.491437 amazon-ssm-agent[2111]: 2025/07/15 04:39:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:19.491437 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:39:19.491777 amazon-ssm-agent[2111]: 2025/07/15 04:39:19 processing appconfig overrides Jul 15 04:39:19.495678 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 15 04:39:19.521366 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.4490 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 04:39:19.521366 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.4909 INFO [EC2Identity] EC2 registration was successful. Jul 15 04:39:19.521366 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.4911 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 15 04:39:19.521597 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.4912 INFO [CredentialRefresher] credentialRefresher has started Jul 15 04:39:19.521597 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.4912 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 04:39:19.521597 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.5209 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 04:39:19.521597 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.5212 INFO [CredentialRefresher] Credentials ready Jul 15 04:39:19.549130 amazon-ssm-agent[2111]: 2025-07-15 04:39:19.5214 INFO [CredentialRefresher] Next credential rotation will be in 29.999992037 minutes Jul 15 04:39:19.656278 systemd[1]: Started sshd@1-172.31.22.130:22-139.178.89.65:44324.service - OpenSSH per-connection server daemon (139.178.89.65:44324). Jul 15 04:39:19.866117 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 44324 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:19.867212 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:19.876782 systemd-logind[1980]: New session 2 of user core. Jul 15 04:39:19.887598 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 04:39:19.946991 ntpd[1973]: Listen normally on 6 eth0 [fe80::470:feff:fec6:236b%2]:123 Jul 15 04:39:19.947883 ntpd[1973]: 15 Jul 04:39:19 ntpd[1973]: Listen normally on 6 eth0 [fe80::470:feff:fec6:236b%2]:123 Jul 15 04:39:20.011421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:20.015341 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 04:39:20.023385 systemd[1]: Startup finished in 3.717s (kernel) + 8.737s (initrd) + 8.662s (userspace) = 21.117s. Jul 15 04:39:20.024480 sshd[2249]: Connection closed by 139.178.89.65 port 44324 Jul 15 04:39:20.024648 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:20.031172 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:20.033048 systemd[1]: sshd@1-172.31.22.130:22-139.178.89.65:44324.service: Deactivated successfully. Jul 15 04:39:20.041806 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 04:39:20.046717 systemd-logind[1980]: Session 2 logged out. Waiting for processes to exit. Jul 15 04:39:20.075150 systemd[1]: Started sshd@2-172.31.22.130:22-139.178.89.65:44336.service - OpenSSH per-connection server daemon (139.178.89.65:44336). Jul 15 04:39:20.077465 systemd-logind[1980]: Removed session 2. Jul 15 04:39:20.273747 sshd[2265]: Accepted publickey for core from 139.178.89.65 port 44336 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:20.276626 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:20.286325 systemd-logind[1980]: New session 3 of user core. Jul 15 04:39:20.293571 systemd[1]: Started session-3.scope - Session 3 of User core. 
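The earlier ntpd failure ("bind(21) AF_INET6 fe80::470:feff:fec6:236b%2#123 ... Cannot assign requested address") resolves itself here: once systemd-networkd reports "eth0: Gained IPv6LL", ntpd opens socket 6 on that link-local address. A small Python sketch of the same condition, assuming an interface named eth0 and using an unprivileged port instead of 123:

```python
# Sketch only: a link-local IPv6 address can only be bound once the kernel has
# finished duplicate-address detection and assigned it. The log shows the failed
# bind at 04:39:16, "eth0: Gained IPv6LL" at 04:39:17, and the successful
# "Listen normally on 6 eth0 [fe80::...]:123" at 04:39:19.
import socket

ADDR = "fe80::470:feff:fec6:236b"   # taken from the log
PORT = 12323                        # port 123 itself needs root; any free port shows the effect

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    scope = socket.if_nametoindex("eth0")   # link-local addresses need an explicit scope
    sock.bind((ADDR, PORT, 0, scope))
    print("bound: address is assigned on eth0")
except OSError as err:
    # EADDRNOTAVAIL ("Cannot assign requested address") while the address is absent or tentative
    print(f"bind failed: {err}")
finally:
    sock.close()
```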
Jul 15 04:39:20.420645 sshd[2272]: Connection closed by 139.178.89.65 port 44336 Jul 15 04:39:20.422572 sshd-session[2265]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:20.431356 systemd-logind[1980]: Session 3 logged out. Waiting for processes to exit. Jul 15 04:39:20.432458 systemd[1]: sshd@2-172.31.22.130:22-139.178.89.65:44336.service: Deactivated successfully. Jul 15 04:39:20.436598 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 04:39:20.440377 systemd-logind[1980]: Removed session 3. Jul 15 04:39:20.553391 amazon-ssm-agent[2111]: 2025-07-15 04:39:20.5495 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 04:39:20.654521 amazon-ssm-agent[2111]: 2025-07-15 04:39:20.5528 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2279) started Jul 15 04:39:20.755303 amazon-ssm-agent[2111]: 2025-07-15 04:39:20.5529 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 04:39:21.142919 kubelet[2256]: E0715 04:39:21.142824 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:21.146920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:21.147241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:21.147885 systemd[1]: kubelet.service: Consumed 1.523s CPU time, 258.3M memory peak. Jul 15 04:39:30.454181 systemd[1]: Started sshd@3-172.31.22.130:22-139.178.89.65:57554.service - OpenSSH per-connection server daemon (139.178.89.65:57554). Jul 15 04:39:30.659272 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 57554 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:30.661575 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:30.670370 systemd-logind[1980]: New session 4 of user core. Jul 15 04:39:30.680580 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 04:39:30.805458 sshd[2295]: Connection closed by 139.178.89.65 port 57554 Jul 15 04:39:30.806478 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:30.813186 systemd[1]: sshd@3-172.31.22.130:22-139.178.89.65:57554.service: Deactivated successfully. Jul 15 04:39:30.817103 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 04:39:30.818876 systemd-logind[1980]: Session 4 logged out. Waiting for processes to exit. Jul 15 04:39:30.821663 systemd-logind[1980]: Removed session 4. Jul 15 04:39:30.845341 systemd[1]: Started sshd@4-172.31.22.130:22-139.178.89.65:57558.service - OpenSSH per-connection server daemon (139.178.89.65:57558). Jul 15 04:39:31.046095 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 57558 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:31.048364 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:31.056086 systemd-logind[1980]: New session 5 of user core. Jul 15 04:39:31.065519 systemd[1]: Started session-5.scope - Session 5 of User core. 
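The kubelet exits here, and keeps exiting on its scheduled restarts later in the log, because /var/lib/kubelet/config.yaml does not exist yet; containerd likewise warned earlier that /etc/cni/net.d holds no network config. Both are expected on a node that has not yet been configured into a cluster (the kubelet config file is typically written when the node is joined, e.g. by kubeadm). A minimal Python sketch of the same two readiness checks, assuming the default paths shown in the log:

```python
# Sketch only: the two "node not bootstrapped yet" conditions visible in this log.
import os

CHECKS = {
    "kubelet config file (/var/lib/kubelet/config.yaml)": "/var/lib/kubelet/config.yaml",
    "CNI config dir (/etc/cni/net.d)": "/etc/cni/net.d",
}

def ready(path: str) -> bool:
    if os.path.isdir(path):
        return any(os.scandir(path))    # directory must exist *and* contain a config
    return os.path.isfile(path)

for label, path in CHECKS.items():
    state = "present" if ready(path) else "missing (node not configured yet)"
    print(f"{label}: {state}")
```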
Jul 15 04:39:31.184072 sshd[2304]: Connection closed by 139.178.89.65 port 57558 Jul 15 04:39:31.185090 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:31.191831 systemd[1]: sshd@4-172.31.22.130:22-139.178.89.65:57558.service: Deactivated successfully. Jul 15 04:39:31.195857 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 04:39:31.198757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:39:31.201083 systemd-logind[1980]: Session 5 logged out. Waiting for processes to exit. Jul 15 04:39:31.203358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:31.220925 systemd-logind[1980]: Removed session 5. Jul 15 04:39:31.226771 systemd[1]: Started sshd@5-172.31.22.130:22-139.178.89.65:57564.service - OpenSSH per-connection server daemon (139.178.89.65:57564). Jul 15 04:39:31.419047 sshd[2313]: Accepted publickey for core from 139.178.89.65 port 57564 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:31.421468 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:31.430272 systemd-logind[1980]: New session 6 of user core. Jul 15 04:39:31.440601 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 04:39:31.559442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:31.569832 sshd[2316]: Connection closed by 139.178.89.65 port 57564 Jul 15 04:39:31.569640 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:31.576822 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:31.577563 systemd[1]: sshd@5-172.31.22.130:22-139.178.89.65:57564.service: Deactivated successfully. Jul 15 04:39:31.584410 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 04:39:31.587830 systemd-logind[1980]: Session 6 logged out. Waiting for processes to exit. Jul 15 04:39:31.607709 systemd[1]: Started sshd@6-172.31.22.130:22-139.178.89.65:57574.service - OpenSSH per-connection server daemon (139.178.89.65:57574). Jul 15 04:39:31.609413 systemd-logind[1980]: Removed session 6. Jul 15 04:39:31.683664 kubelet[2323]: E0715 04:39:31.683494 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:31.691007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:31.691346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:31.693470 systemd[1]: kubelet.service: Consumed 328ms CPU time, 106.7M memory peak. Jul 15 04:39:31.817254 sshd[2331]: Accepted publickey for core from 139.178.89.65 port 57574 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:31.819568 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:31.827485 systemd-logind[1980]: New session 7 of user core. Jul 15 04:39:31.832529 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 15 04:39:31.953616 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 04:39:31.954230 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:39:31.973553 sudo[2339]: pam_unix(sudo:session): session closed for user root Jul 15 04:39:31.997432 sshd[2338]: Connection closed by 139.178.89.65 port 57574 Jul 15 04:39:31.998467 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:32.005830 systemd[1]: sshd@6-172.31.22.130:22-139.178.89.65:57574.service: Deactivated successfully. Jul 15 04:39:32.009255 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:39:32.010864 systemd-logind[1980]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:39:32.013427 systemd-logind[1980]: Removed session 7. Jul 15 04:39:32.033497 systemd[1]: Started sshd@7-172.31.22.130:22-139.178.89.65:57582.service - OpenSSH per-connection server daemon (139.178.89.65:57582). Jul 15 04:39:32.225130 sshd[2345]: Accepted publickey for core from 139.178.89.65 port 57582 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:32.227639 sshd-session[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:32.235342 systemd-logind[1980]: New session 8 of user core. Jul 15 04:39:32.246589 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 04:39:32.350006 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 04:39:32.351121 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:39:32.359233 sudo[2350]: pam_unix(sudo:session): session closed for user root Jul 15 04:39:32.369209 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 04:39:32.369847 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:39:32.387790 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:39:32.460964 augenrules[2372]: No rules Jul 15 04:39:32.463251 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:39:32.464402 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:39:32.466770 sudo[2349]: pam_unix(sudo:session): session closed for user root Jul 15 04:39:32.490408 sshd[2348]: Connection closed by 139.178.89.65 port 57582 Jul 15 04:39:32.490185 sshd-session[2345]: pam_unix(sshd:session): session closed for user core Jul 15 04:39:32.498267 systemd-logind[1980]: Session 8 logged out. Waiting for processes to exit. Jul 15 04:39:32.498854 systemd[1]: sshd@7-172.31.22.130:22-139.178.89.65:57582.service: Deactivated successfully. Jul 15 04:39:32.503443 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 04:39:32.507106 systemd-logind[1980]: Removed session 8. Jul 15 04:39:32.525992 systemd[1]: Started sshd@8-172.31.22.130:22-139.178.89.65:57594.service - OpenSSH per-connection server daemon (139.178.89.65:57594). Jul 15 04:39:32.723102 sshd[2381]: Accepted publickey for core from 139.178.89.65 port 57594 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:39:32.725479 sshd-session[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:39:32.733098 systemd-logind[1980]: New session 9 of user core. Jul 15 04:39:32.741502 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 15 04:39:32.846222 sudo[2385]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 04:39:32.847416 sudo[2385]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:39:33.383959 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 04:39:33.398042 (dockerd)[2402]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 04:39:33.778648 dockerd[2402]: time="2025-07-15T04:39:33.777486189Z" level=info msg="Starting up" Jul 15 04:39:33.780996 dockerd[2402]: time="2025-07-15T04:39:33.780799291Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 04:39:33.800707 dockerd[2402]: time="2025-07-15T04:39:33.800651041Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 04:39:33.826895 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1499116640-merged.mount: Deactivated successfully. Jul 15 04:39:33.840683 systemd[1]: var-lib-docker-metacopy\x2dcheck3419475084-merged.mount: Deactivated successfully. Jul 15 04:39:33.859231 dockerd[2402]: time="2025-07-15T04:39:33.859157857Z" level=info msg="Loading containers: start." Jul 15 04:39:33.874452 kernel: Initializing XFRM netlink socket Jul 15 04:39:34.191321 (udev-worker)[2424]: Network interface NamePolicy= disabled on kernel command line. Jul 15 04:39:34.264977 systemd-networkd[1869]: docker0: Link UP Jul 15 04:39:34.270913 dockerd[2402]: time="2025-07-15T04:39:34.270843104Z" level=info msg="Loading containers: done." Jul 15 04:39:34.297612 dockerd[2402]: time="2025-07-15T04:39:34.297542408Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 04:39:34.297811 dockerd[2402]: time="2025-07-15T04:39:34.297660081Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 04:39:34.297865 dockerd[2402]: time="2025-07-15T04:39:34.297805341Z" level=info msg="Initializing buildkit" Jul 15 04:39:34.335260 dockerd[2402]: time="2025-07-15T04:39:34.335168558Z" level=info msg="Completed buildkit initialization" Jul 15 04:39:34.351959 dockerd[2402]: time="2025-07-15T04:39:34.351868632Z" level=info msg="Daemon has completed initialization" Jul 15 04:39:34.352350 dockerd[2402]: time="2025-07-15T04:39:34.352131912Z" level=info msg="API listen on /run/docker.sock" Jul 15 04:39:34.353730 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 04:39:34.819590 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1556184075-merged.mount: Deactivated successfully. Jul 15 04:39:35.474400 containerd[2009]: time="2025-07-15T04:39:35.474334525Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 04:39:36.030827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919913512.mount: Deactivated successfully. 
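Once dockerd logs "API listen on /run/docker.sock", the engine is reachable over its Unix socket. A minimal stdlib-only Python sketch that queries the Engine API version endpoint over that socket (assumes the caller can read /run/docker.sock, i.e. root or the docker group):

```python
# Sketch only: talk to the Docker Engine API over the Unix socket from the log.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")   # host is only used for the Host: header
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")         # returns engine version/commit as JSON
resp = conn.getresponse()
print(resp.status, resp.read().decode())
conn.close()
```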
Jul 15 04:39:37.199429 containerd[2009]: time="2025-07-15T04:39:37.199371280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:37.201306 containerd[2009]: time="2025-07-15T04:39:37.201227760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 15 04:39:37.201905 containerd[2009]: time="2025-07-15T04:39:37.201867292Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:37.207637 containerd[2009]: time="2025-07-15T04:39:37.207585359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:37.209673 containerd[2009]: time="2025-07-15T04:39:37.209625971Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.735232531s" Jul 15 04:39:37.209881 containerd[2009]: time="2025-07-15T04:39:37.209827182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 15 04:39:37.213616 containerd[2009]: time="2025-07-15T04:39:37.213533076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 04:39:38.545336 containerd[2009]: time="2025-07-15T04:39:38.544397911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:38.547602 containerd[2009]: time="2025-07-15T04:39:38.547556459Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 15 04:39:38.550469 containerd[2009]: time="2025-07-15T04:39:38.550400716Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:38.554319 containerd[2009]: time="2025-07-15T04:39:38.554241147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:38.556399 containerd[2009]: time="2025-07-15T04:39:38.556323293Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.34249255s" Jul 15 04:39:38.556724 containerd[2009]: time="2025-07-15T04:39:38.556563869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 15 
04:39:38.557958 containerd[2009]: time="2025-07-15T04:39:38.557880714Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 04:39:39.682317 containerd[2009]: time="2025-07-15T04:39:39.681998937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:39.684991 containerd[2009]: time="2025-07-15T04:39:39.684920795Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 15 04:39:39.687322 containerd[2009]: time="2025-07-15T04:39:39.686766096Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:39.692393 containerd[2009]: time="2025-07-15T04:39:39.692347240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:39.695947 containerd[2009]: time="2025-07-15T04:39:39.695892859Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.137926435s" Jul 15 04:39:39.696109 containerd[2009]: time="2025-07-15T04:39:39.696081932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 15 04:39:39.696812 containerd[2009]: time="2025-07-15T04:39:39.696743845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 04:39:40.875497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110709775.mount: Deactivated successfully. 
Jul 15 04:39:41.438509 containerd[2009]: time="2025-07-15T04:39:41.438443996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:41.439729 containerd[2009]: time="2025-07-15T04:39:41.439657345Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 15 04:39:41.441081 containerd[2009]: time="2025-07-15T04:39:41.441006526Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:41.444012 containerd[2009]: time="2025-07-15T04:39:41.443903437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:41.445602 containerd[2009]: time="2025-07-15T04:39:41.445179731Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.748370674s" Jul 15 04:39:41.445602 containerd[2009]: time="2025-07-15T04:39:41.445230358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 15 04:39:41.446474 containerd[2009]: time="2025-07-15T04:39:41.446400468Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 04:39:41.844143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 04:39:41.848633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:41.964453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3926716062.mount: Deactivated successfully. Jul 15 04:39:42.340560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:42.366114 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:42.494713 kubelet[2704]: E0715 04:39:42.494624 2704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:42.499402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:42.499713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:42.500488 systemd[1]: kubelet.service: Consumed 325ms CPU time, 105.1M memory peak. 
Jul 15 04:39:43.318681 containerd[2009]: time="2025-07-15T04:39:43.318596779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:43.320526 containerd[2009]: time="2025-07-15T04:39:43.320445882Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 15 04:39:43.321723 containerd[2009]: time="2025-07-15T04:39:43.321651063Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:43.326427 containerd[2009]: time="2025-07-15T04:39:43.326326888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:43.328655 containerd[2009]: time="2025-07-15T04:39:43.328473658Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.881861808s" Jul 15 04:39:43.328655 containerd[2009]: time="2025-07-15T04:39:43.328528111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 04:39:43.329705 containerd[2009]: time="2025-07-15T04:39:43.329645232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 04:39:43.775126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660053107.mount: Deactivated successfully. 
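For a sense of scale, the sizes and wall-clock durations containerd reports for the pulls above work out to effective rates of roughly 9 to 17 MiB/s per image; a back-of-envelope Python sketch using the exact figures from the log:

```python
# Back-of-envelope sketch: effective pull rate per image, computed directly from
# the image sizes and pull durations reported by containerd above.
PULLS = {
    "kube-apiserver:v1.31.10":          (25_648_593, 1.735232531),
    "kube-controller-manager:v1.31.10": (23_995_467, 1.342492550),
    "kube-scheduler:v1.31.10":          (18_660_874, 1.137926435),
    "kube-proxy:v1.31.10":              (26_914_976, 1.748370674),
    "coredns:v1.11.3":                  (16_948_420, 1.881861808),
}

for image, (size_bytes, seconds) in PULLS.items():
    rate_mib_s = size_bytes / seconds / (1 << 20)
    print(f"{image}: {size_bytes} bytes in {seconds:.3f}s -> {rate_mib_s:.1f} MiB/s")
```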
Jul 15 04:39:43.782167 containerd[2009]: time="2025-07-15T04:39:43.781928131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:39:43.783187 containerd[2009]: time="2025-07-15T04:39:43.783136311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 15 04:39:43.784199 containerd[2009]: time="2025-07-15T04:39:43.784108005Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:39:43.787478 containerd[2009]: time="2025-07-15T04:39:43.787396831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:39:43.789317 containerd[2009]: time="2025-07-15T04:39:43.788847937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.143635ms" Jul 15 04:39:43.789317 containerd[2009]: time="2025-07-15T04:39:43.788899068Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 04:39:43.789827 containerd[2009]: time="2025-07-15T04:39:43.789767134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 04:39:44.273019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436878801.mount: Deactivated successfully. 
Jul 15 04:39:46.143025 containerd[2009]: time="2025-07-15T04:39:46.142968267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:46.146443 containerd[2009]: time="2025-07-15T04:39:46.146391547Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 15 04:39:46.149093 containerd[2009]: time="2025-07-15T04:39:46.149011037Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:46.157107 containerd[2009]: time="2025-07-15T04:39:46.157016240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:39:46.164317 containerd[2009]: time="2025-07-15T04:39:46.164151830Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.374312769s" Jul 15 04:39:46.164317 containerd[2009]: time="2025-07-15T04:39:46.164215195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 15 04:39:48.649585 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 04:39:52.594329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 15 04:39:52.599622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:52.939602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:52.955931 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:39:53.035598 kubelet[2840]: E0715 04:39:53.035518 2840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:39:53.039704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:39:53.040007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:39:53.041064 systemd[1]: kubelet.service: Consumed 295ms CPU time, 107M memory peak. Jul 15 04:39:56.141282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:56.141646 systemd[1]: kubelet.service: Consumed 295ms CPU time, 107M memory peak. Jul 15 04:39:56.145369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:56.192362 systemd[1]: Reload requested from client PID 2854 ('systemctl') (unit session-9.scope)... Jul 15 04:39:56.192396 systemd[1]: Reloading... Jul 15 04:39:56.434334 zram_generator::config[2904]: No configuration found. 
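The three pulls recorded above (coredns, pause, etcd) each log the bytes read together with the elapsed pull time, so a rough effective throughput can be read off directly. A small stdlib-only sketch using the figures from this log (plain arithmetic on the reported numbers, not a containerd API):

    # Bytes read and reported pull duration, copied from the log lines above.
    pulls = {
        "registry.k8s.io/coredns/coredns:v1.11.3": (16951622, 1.881861808),
        "registry.k8s.io/pause:3.10":              (268703,   0.459143635),
        "registry.k8s.io/etcd:3.5.15-0":           (66406465, 2.374312769),
    }
    for image, (nbytes, seconds) in pulls.items():
        print(f"{image}: {nbytes / seconds / 2**20:.1f} MiB/s")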
Jul 15 04:39:56.627957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:39:56.884607 systemd[1]: Reloading finished in 691 ms. Jul 15 04:39:56.993977 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 04:39:56.994159 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 04:39:56.994776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:56.994863 systemd[1]: kubelet.service: Consumed 221ms CPU time, 95M memory peak. Jul 15 04:39:56.997858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:39:57.372866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:39:57.387796 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:39:57.461988 kubelet[2961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:39:57.462515 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 04:39:57.462601 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:39:57.463694 kubelet[2961]: I0715 04:39:57.462909 2961 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:39:58.335414 kubelet[2961]: I0715 04:39:58.335025 2961 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 04:39:58.335414 kubelet[2961]: I0715 04:39:58.335087 2961 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:39:58.335908 kubelet[2961]: I0715 04:39:58.335882 2961 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 04:39:58.379159 kubelet[2961]: E0715 04:39:58.379086 2961 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:58.380712 kubelet[2961]: I0715 04:39:58.380660 2961 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:39:58.394775 kubelet[2961]: I0715 04:39:58.394643 2961 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:39:58.405320 kubelet[2961]: I0715 04:39:58.403677 2961 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:39:58.405320 kubelet[2961]: I0715 04:39:58.404381 2961 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 04:39:58.405320 kubelet[2961]: I0715 04:39:58.404651 2961 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:39:58.405320 kubelet[2961]: I0715 04:39:58.404703 2961 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:39:58.405731 kubelet[2961]: I0715 04:39:58.405326 2961 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:39:58.405731 kubelet[2961]: I0715 04:39:58.405349 2961 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 04:39:58.405834 kubelet[2961]: I0715 04:39:58.405809 2961 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:39:58.412818 kubelet[2961]: I0715 04:39:58.412762 2961 kubelet.go:408] "Attempting to sync node with API server" Jul 15 04:39:58.412818 kubelet[2961]: I0715 04:39:58.412821 2961 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:39:58.413015 kubelet[2961]: I0715 04:39:58.412860 2961 kubelet.go:314] "Adding apiserver pod source" Jul 15 04:39:58.413080 kubelet[2961]: I0715 04:39:58.413022 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:39:58.416521 kubelet[2961]: W0715 04:39:58.416443 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-130&limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:58.416740 kubelet[2961]: E0715 04:39:58.416707 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.22.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-130&limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:58.421554 kubelet[2961]: W0715 04:39:58.421462 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:58.421709 kubelet[2961]: E0715 04:39:58.421566 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:58.422332 kubelet[2961]: I0715 04:39:58.422257 2961 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:39:58.423603 kubelet[2961]: I0715 04:39:58.423549 2961 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:39:58.423985 kubelet[2961]: W0715 04:39:58.423911 2961 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 04:39:58.428631 kubelet[2961]: I0715 04:39:58.428097 2961 server.go:1274] "Started kubelet" Jul 15 04:39:58.432018 kubelet[2961]: I0715 04:39:58.431937 2961 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:39:58.433952 kubelet[2961]: I0715 04:39:58.433887 2961 server.go:449] "Adding debug handlers to kubelet server" Jul 15 04:39:58.434161 kubelet[2961]: I0715 04:39:58.433862 2961 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:39:58.434992 kubelet[2961]: I0715 04:39:58.434955 2961 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:39:58.437677 kubelet[2961]: E0715 04:39:58.435357 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-130.185252f09fda4490 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-130,UID:ip-172-31-22-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-130,},FirstTimestamp:2025-07-15 04:39:58.428054672 +0000 UTC m=+1.033690972,LastTimestamp:2025-07-15 04:39:58.428054672 +0000 UTC m=+1.033690972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-130,}" Jul 15 04:39:58.441930 kubelet[2961]: I0715 04:39:58.439358 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:39:58.441930 kubelet[2961]: I0715 04:39:58.439559 2961 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:39:58.448431 kubelet[2961]: E0715 04:39:58.448376 2961 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ip-172-31-22-130\" not found" Jul 15 04:39:58.448630 kubelet[2961]: I0715 04:39:58.448611 2961 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 04:39:58.449099 kubelet[2961]: I0715 04:39:58.449044 2961 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 04:39:58.449434 kubelet[2961]: I0715 04:39:58.449412 2961 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:39:58.451557 kubelet[2961]: W0715 04:39:58.451484 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:58.451911 kubelet[2961]: E0715 04:39:58.451851 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:58.452221 kubelet[2961]: E0715 04:39:58.452173 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-130?timeout=10s\": dial tcp 172.31.22.130:6443: connect: connection refused" interval="200ms" Jul 15 04:39:58.452836 kubelet[2961]: I0715 04:39:58.452792 2961 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:39:58.453212 kubelet[2961]: I0715 04:39:58.453177 2961 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:39:58.455263 kubelet[2961]: E0715 04:39:58.455226 2961 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:39:58.455959 kubelet[2961]: I0715 04:39:58.455925 2961 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:39:58.470433 kubelet[2961]: I0715 04:39:58.470281 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:39:58.476010 kubelet[2961]: I0715 04:39:58.475956 2961 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 04:39:58.476208 kubelet[2961]: I0715 04:39:58.476190 2961 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 04:39:58.476394 kubelet[2961]: I0715 04:39:58.476373 2961 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 04:39:58.476666 kubelet[2961]: E0715 04:39:58.476592 2961 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:39:58.480870 kubelet[2961]: W0715 04:39:58.480702 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:58.480870 kubelet[2961]: E0715 04:39:58.480821 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:58.500869 kubelet[2961]: I0715 04:39:58.500600 2961 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 04:39:58.501198 kubelet[2961]: I0715 04:39:58.501160 2961 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 04:39:58.501363 kubelet[2961]: I0715 04:39:58.501343 2961 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:39:58.507779 kubelet[2961]: I0715 04:39:58.507746 2961 policy_none.go:49] "None policy: Start" Jul 15 04:39:58.509593 kubelet[2961]: I0715 04:39:58.509451 2961 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 04:39:58.509593 kubelet[2961]: I0715 04:39:58.509506 2961 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:39:58.525915 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 04:39:58.544681 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:39:58.549127 kubelet[2961]: E0715 04:39:58.549062 2961 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-130\" not found" Jul 15 04:39:58.552591 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 04:39:58.562053 kubelet[2961]: I0715 04:39:58.561926 2961 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:39:58.562413 kubelet[2961]: I0715 04:39:58.562218 2961 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:39:58.562413 kubelet[2961]: I0715 04:39:58.562250 2961 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:39:58.563830 kubelet[2961]: I0715 04:39:58.563261 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:39:58.571413 kubelet[2961]: E0715 04:39:58.571355 2961 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-130\" not found" Jul 15 04:39:58.599579 systemd[1]: Created slice kubepods-burstable-pod64ab1aa13452e26cc1d87cb715753d45.slice - libcontainer container kubepods-burstable-pod64ab1aa13452e26cc1d87cb715753d45.slice. 
Jul 15 04:39:58.620236 systemd[1]: Created slice kubepods-burstable-pod42ff0ec985d976f5f92d59da7c25aa48.slice - libcontainer container kubepods-burstable-pod42ff0ec985d976f5f92d59da7c25aa48.slice. Jul 15 04:39:58.639907 systemd[1]: Created slice kubepods-burstable-pod8dba00ced397519a9d3457c92e5bbc2a.slice - libcontainer container kubepods-burstable-pod8dba00ced397519a9d3457c92e5bbc2a.slice. Jul 15 04:39:58.650910 kubelet[2961]: I0715 04:39:58.650785 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64ab1aa13452e26cc1d87cb715753d45-ca-certs\") pod \"kube-apiserver-ip-172-31-22-130\" (UID: \"64ab1aa13452e26cc1d87cb715753d45\") " pod="kube-system/kube-apiserver-ip-172-31-22-130" Jul 15 04:39:58.651052 kubelet[2961]: I0715 04:39:58.650927 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:39:58.651052 kubelet[2961]: I0715 04:39:58.651006 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:39:58.651152 kubelet[2961]: I0715 04:39:58.651048 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dba00ced397519a9d3457c92e5bbc2a-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-130\" (UID: \"8dba00ced397519a9d3457c92e5bbc2a\") " pod="kube-system/kube-scheduler-ip-172-31-22-130" Jul 15 04:39:58.651220 kubelet[2961]: I0715 04:39:58.651124 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64ab1aa13452e26cc1d87cb715753d45-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-130\" (UID: \"64ab1aa13452e26cc1d87cb715753d45\") " pod="kube-system/kube-apiserver-ip-172-31-22-130" Jul 15 04:39:58.651270 kubelet[2961]: I0715 04:39:58.651234 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64ab1aa13452e26cc1d87cb715753d45-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-130\" (UID: \"64ab1aa13452e26cc1d87cb715753d45\") " pod="kube-system/kube-apiserver-ip-172-31-22-130" Jul 15 04:39:58.651386 kubelet[2961]: I0715 04:39:58.651344 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:39:58.651472 kubelet[2961]: I0715 04:39:58.651447 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:39:58.651568 kubelet[2961]: I0715 04:39:58.651524 2961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:39:58.653480 kubelet[2961]: E0715 04:39:58.653414 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-130?timeout=10s\": dial tcp 172.31.22.130:6443: connect: connection refused" interval="400ms" Jul 15 04:39:58.665140 kubelet[2961]: I0715 04:39:58.665077 2961 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-130" Jul 15 04:39:58.665994 kubelet[2961]: E0715 04:39:58.665946 2961 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.130:6443/api/v1/nodes\": dial tcp 172.31.22.130:6443: connect: connection refused" node="ip-172-31-22-130" Jul 15 04:39:58.869075 kubelet[2961]: I0715 04:39:58.868958 2961 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-130" Jul 15 04:39:58.870200 kubelet[2961]: E0715 04:39:58.870075 2961 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.130:6443/api/v1/nodes\": dial tcp 172.31.22.130:6443: connect: connection refused" node="ip-172-31-22-130" Jul 15 04:39:58.916280 containerd[2009]: time="2025-07-15T04:39:58.916216578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-130,Uid:64ab1aa13452e26cc1d87cb715753d45,Namespace:kube-system,Attempt:0,}" Jul 15 04:39:58.936624 containerd[2009]: time="2025-07-15T04:39:58.936254319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-130,Uid:42ff0ec985d976f5f92d59da7c25aa48,Namespace:kube-system,Attempt:0,}" Jul 15 04:39:58.946034 containerd[2009]: time="2025-07-15T04:39:58.945963294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-130,Uid:8dba00ced397519a9d3457c92e5bbc2a,Namespace:kube-system,Attempt:0,}" Jul 15 04:39:58.971622 containerd[2009]: time="2025-07-15T04:39:58.971458352Z" level=info msg="connecting to shim cd12a0e6ad67f240d026f9251ef521dec6ba597c069d23a79af38308b51b24ef" address="unix:///run/containerd/s/3a7f745e8bc87c95fe0c15f66b1988045af0548381cba9a848ffd6e776665315" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:39:59.027619 containerd[2009]: time="2025-07-15T04:39:59.027066026Z" level=info msg="connecting to shim 8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df" address="unix:///run/containerd/s/ff542dabaa6f1b67face6c3aa9a4047f416066d73704c371f23d149dd6f278d9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:39:59.042936 containerd[2009]: time="2025-07-15T04:39:59.042883797Z" level=info msg="connecting to shim e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7" address="unix:///run/containerd/s/47ecd50eb916747597375683f1a863338ec534800f73bfc0ff2448bc8218cdff" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:39:59.045618 systemd[1]: Started 
cri-containerd-cd12a0e6ad67f240d026f9251ef521dec6ba597c069d23a79af38308b51b24ef.scope - libcontainer container cd12a0e6ad67f240d026f9251ef521dec6ba597c069d23a79af38308b51b24ef. Jul 15 04:39:59.055331 kubelet[2961]: E0715 04:39:59.055000 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-130?timeout=10s\": dial tcp 172.31.22.130:6443: connect: connection refused" interval="800ms" Jul 15 04:39:59.125718 systemd[1]: Started cri-containerd-e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7.scope - libcontainer container e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7. Jul 15 04:39:59.144636 systemd[1]: Started cri-containerd-8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df.scope - libcontainer container 8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df. Jul 15 04:39:59.188991 containerd[2009]: time="2025-07-15T04:39:59.187890970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-130,Uid:64ab1aa13452e26cc1d87cb715753d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd12a0e6ad67f240d026f9251ef521dec6ba597c069d23a79af38308b51b24ef\"" Jul 15 04:39:59.196609 containerd[2009]: time="2025-07-15T04:39:59.196543718Z" level=info msg="CreateContainer within sandbox \"cd12a0e6ad67f240d026f9251ef521dec6ba597c069d23a79af38308b51b24ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:39:59.228130 containerd[2009]: time="2025-07-15T04:39:59.227750199Z" level=info msg="Container 1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:39:59.246397 containerd[2009]: time="2025-07-15T04:39:59.246314020Z" level=info msg="CreateContainer within sandbox \"cd12a0e6ad67f240d026f9251ef521dec6ba597c069d23a79af38308b51b24ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841\"" Jul 15 04:39:59.248716 containerd[2009]: time="2025-07-15T04:39:59.248648844Z" level=info msg="StartContainer for \"1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841\"" Jul 15 04:39:59.253689 containerd[2009]: time="2025-07-15T04:39:59.253612165Z" level=info msg="connecting to shim 1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841" address="unix:///run/containerd/s/3a7f745e8bc87c95fe0c15f66b1988045af0548381cba9a848ffd6e776665315" protocol=ttrpc version=3 Jul 15 04:39:59.268056 containerd[2009]: time="2025-07-15T04:39:59.267830800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-130,Uid:42ff0ec985d976f5f92d59da7c25aa48,Namespace:kube-system,Attempt:0,} returns sandbox id \"e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7\"" Jul 15 04:39:59.277215 kubelet[2961]: I0715 04:39:59.276050 2961 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-130" Jul 15 04:39:59.279609 containerd[2009]: time="2025-07-15T04:39:59.278795319Z" level=info msg="CreateContainer within sandbox \"e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:39:59.279764 kubelet[2961]: E0715 04:39:59.279428 2961 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.130:6443/api/v1/nodes\": dial tcp 
172.31.22.130:6443: connect: connection refused" node="ip-172-31-22-130" Jul 15 04:39:59.287362 containerd[2009]: time="2025-07-15T04:39:59.287260841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-130,Uid:8dba00ced397519a9d3457c92e5bbc2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df\"" Jul 15 04:39:59.294946 containerd[2009]: time="2025-07-15T04:39:59.294649816Z" level=info msg="CreateContainer within sandbox \"8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:39:59.309510 containerd[2009]: time="2025-07-15T04:39:59.309447798Z" level=info msg="Container f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:39:59.313604 systemd[1]: Started cri-containerd-1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841.scope - libcontainer container 1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841. Jul 15 04:39:59.323090 containerd[2009]: time="2025-07-15T04:39:59.322998775Z" level=info msg="Container 5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:39:59.333610 kubelet[2961]: W0715 04:39:59.333235 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:59.333890 kubelet[2961]: E0715 04:39:59.333560 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:59.342941 containerd[2009]: time="2025-07-15T04:39:59.342767957Z" level=info msg="CreateContainer within sandbox \"e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\"" Jul 15 04:39:59.347910 containerd[2009]: time="2025-07-15T04:39:59.347810271Z" level=info msg="StartContainer for \"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\"" Jul 15 04:39:59.352675 containerd[2009]: time="2025-07-15T04:39:59.352602162Z" level=info msg="connecting to shim f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e" address="unix:///run/containerd/s/47ecd50eb916747597375683f1a863338ec534800f73bfc0ff2448bc8218cdff" protocol=ttrpc version=3 Jul 15 04:39:59.365768 containerd[2009]: time="2025-07-15T04:39:59.365706710Z" level=info msg="CreateContainer within sandbox \"8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\"" Jul 15 04:39:59.368079 kubelet[2961]: W0715 04:39:59.367990 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:59.368251 
kubelet[2961]: E0715 04:39:59.368093 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:59.368423 containerd[2009]: time="2025-07-15T04:39:59.368367243Z" level=info msg="StartContainer for \"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\"" Jul 15 04:39:59.372848 containerd[2009]: time="2025-07-15T04:39:59.372779872Z" level=info msg="connecting to shim 5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333" address="unix:///run/containerd/s/ff542dabaa6f1b67face6c3aa9a4047f416066d73704c371f23d149dd6f278d9" protocol=ttrpc version=3 Jul 15 04:39:59.401845 systemd[1]: Started cri-containerd-f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e.scope - libcontainer container f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e. Jul 15 04:39:59.443747 systemd[1]: Started cri-containerd-5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333.scope - libcontainer container 5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333. Jul 15 04:39:59.452256 kubelet[2961]: W0715 04:39:59.452060 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-130&limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:59.452256 kubelet[2961]: E0715 04:39:59.452160 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-130&limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:59.464835 containerd[2009]: time="2025-07-15T04:39:59.464683142Z" level=info msg="StartContainer for \"1d84b8254178c7fa39532d4e425f754f16a2c2a64d186a208f3337f014c04841\" returns successfully" Jul 15 04:39:59.508470 kubelet[2961]: W0715 04:39:59.508250 2961 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.130:6443: connect: connection refused Jul 15 04:39:59.508470 kubelet[2961]: E0715 04:39:59.508405 2961 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.130:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:39:59.563535 containerd[2009]: time="2025-07-15T04:39:59.563440502Z" level=info msg="StartContainer for \"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\" returns successfully" Jul 15 04:39:59.646149 containerd[2009]: time="2025-07-15T04:39:59.646082749Z" level=info msg="StartContainer for \"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\" returns successfully" Jul 15 04:40:00.083104 kubelet[2961]: I0715 04:40:00.083057 2961 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-130" Jul 15 04:40:01.908438 update_engine[1981]: 
I20250715 04:40:01.908334 1981 update_attempter.cc:509] Updating boot flags... Jul 15 04:40:04.277087 kubelet[2961]: I0715 04:40:04.276984 2961 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-130" Jul 15 04:40:04.428211 kubelet[2961]: I0715 04:40:04.428155 2961 apiserver.go:52] "Watching apiserver" Jul 15 04:40:04.449484 kubelet[2961]: I0715 04:40:04.449420 2961 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:40:04.475378 kubelet[2961]: E0715 04:40:04.474790 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Jul 15 04:40:06.368021 systemd[1]: Reload requested from client PID 3502 ('systemctl') (unit session-9.scope)... Jul 15 04:40:06.368053 systemd[1]: Reloading... Jul 15 04:40:06.549330 zram_generator::config[3549]: No configuration found. Jul 15 04:40:06.750691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:40:07.041431 systemd[1]: Reloading finished in 672 ms. Jul 15 04:40:07.115058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:07.133236 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:40:07.133785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:07.133882 systemd[1]: kubelet.service: Consumed 1.867s CPU time, 128.8M memory peak. Jul 15 04:40:07.138856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:07.504824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:07.519267 (kubelet)[3606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:40:07.605625 kubelet[3606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:40:07.605625 kubelet[3606]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 04:40:07.605625 kubelet[3606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:40:07.607317 kubelet[3606]: I0715 04:40:07.606352 3606 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:40:07.619374 kubelet[3606]: I0715 04:40:07.619273 3606 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 04:40:07.619560 kubelet[3606]: I0715 04:40:07.619541 3606 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:40:07.620079 kubelet[3606]: I0715 04:40:07.620056 3606 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 04:40:07.623397 kubelet[3606]: I0715 04:40:07.623357 3606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
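While the API server at 172.31.22.130:6443 was still refusing connections, the lease controller's "Failed to ensure lease exists, will retry" messages above show the retry interval doubling on each failure: 200ms, 400ms, 800ms, then 1.6s. A toy generator that reproduces the observed sequence (purely illustrative; the controller's actual backoff policy and cap are not visible in this log):

    # Toy exponential backoff matching the retry intervals observed above;
    # not the kubelet lease controller's implementation.
    def retry_intervals(start: float = 0.2, factor: float = 2.0):
        interval = start
        while True:
            yield interval
            interval *= factor

    gen = retry_intervals()
    print([next(gen) for _ in range(4)])   # [0.2, 0.4, 0.8, 1.6]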
Jul 15 04:40:07.631281 kubelet[3606]: I0715 04:40:07.631238 3606 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:40:07.645598 kubelet[3606]: I0715 04:40:07.645566 3606 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:40:07.652213 kubelet[3606]: I0715 04:40:07.652144 3606 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 04:40:07.652637 kubelet[3606]: I0715 04:40:07.652580 3606 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 04:40:07.653790 kubelet[3606]: I0715 04:40:07.652992 3606 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:40:07.653790 kubelet[3606]: I0715 04:40:07.653043 3606 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:40:07.653790 kubelet[3606]: I0715 04:40:07.653362 3606 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:40:07.653790 kubelet[3606]: I0715 04:40:07.653381 3606 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 04:40:07.654121 kubelet[3606]: I0715 04:40:07.653445 3606 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:40:07.654121 kubelet[3606]: I0715 04:40:07.653608 3606 kubelet.go:408] "Attempting to sync node with API server" Jul 15 04:40:07.654121 kubelet[3606]: I0715 04:40:07.653631 3606 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:40:07.654121 kubelet[3606]: I0715 04:40:07.653680 3606 kubelet.go:314] "Adding apiserver pod source" Jul 15 04:40:07.654121 kubelet[3606]: I0715 04:40:07.653707 3606 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:40:07.659062 kubelet[3606]: I0715 04:40:07.659028 3606 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:40:07.663329 kubelet[3606]: I0715 04:40:07.662718 3606 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:40:07.663750 kubelet[3606]: I0715 04:40:07.663634 3606 server.go:1274] "Started kubelet" Jul 15 04:40:07.672453 kubelet[3606]: I0715 04:40:07.672414 3606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:40:07.677912 kubelet[3606]: I0715 04:40:07.677858 3606 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:40:07.683317 kubelet[3606]: I0715 04:40:07.682510 3606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:40:07.683317 kubelet[3606]: I0715 04:40:07.682916 3606 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:40:07.683317 kubelet[3606]: I0715 04:40:07.683221 3606 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:40:07.688102 kubelet[3606]: E0715 04:40:07.687953 3606 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-130\" not found" Jul 15 04:40:07.692818 kubelet[3606]: I0715 04:40:07.688426 3606 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 04:40:07.695438 kubelet[3606]: I0715 04:40:07.688470 3606 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 04:40:07.697424 kubelet[3606]: I0715 04:40:07.695745 3606 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:40:07.697424 kubelet[3606]: I0715 04:40:07.689340 3606 server.go:449] "Adding debug handlers to kubelet server" Jul 15 04:40:07.712386 kubelet[3606]: I0715 04:40:07.712351 3606 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:40:07.739322 kubelet[3606]: I0715 04:40:07.737818 3606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:40:07.750468 kubelet[3606]: E0715 04:40:07.750423 3606 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:40:07.768313 kubelet[3606]: I0715 04:40:07.768148 3606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:40:07.772823 kubelet[3606]: I0715 04:40:07.772781 3606 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 04:40:07.773369 kubelet[3606]: I0715 04:40:07.773341 3606 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 04:40:07.773542 kubelet[3606]: I0715 04:40:07.773523 3606 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 04:40:07.773739 kubelet[3606]: E0715 04:40:07.773705 3606 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:40:07.782878 kubelet[3606]: I0715 04:40:07.782838 3606 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:40:07.873926 kubelet[3606]: E0715 04:40:07.873872 3606 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:40:07.890278 kubelet[3606]: I0715 04:40:07.889777 3606 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 04:40:07.890278 kubelet[3606]: I0715 04:40:07.889808 3606 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 04:40:07.890278 kubelet[3606]: I0715 04:40:07.889845 3606 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:40:07.890278 kubelet[3606]: I0715 04:40:07.890161 3606 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:40:07.890278 kubelet[3606]: I0715 04:40:07.890182 3606 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:40:07.890673 kubelet[3606]: I0715 04:40:07.890650 3606 policy_none.go:49] "None policy: Start" Jul 15 04:40:07.893012 kubelet[3606]: I0715 04:40:07.892982 3606 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 04:40:07.893213 kubelet[3606]: I0715 04:40:07.893194 3606 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:40:07.893607 kubelet[3606]: I0715 04:40:07.893584 3606 state_mem.go:75] "Updated machine memory state" Jul 15 04:40:07.903004 kubelet[3606]: I0715 04:40:07.902955 3606 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:40:07.903389 kubelet[3606]: I0715 04:40:07.903244 3606 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:40:07.903389 kubelet[3606]: I0715 04:40:07.903276 3606 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:40:07.904955 kubelet[3606]: I0715 04:40:07.904188 3606 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:40:08.034433 kubelet[3606]: I0715 04:40:08.033779 3606 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-130" Jul 15 04:40:08.049496 kubelet[3606]: I0715 04:40:08.049425 3606 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-22-130" Jul 15 04:40:08.049624 kubelet[3606]: I0715 04:40:08.049562 3606 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-130" Jul 15 04:40:08.099739 kubelet[3606]: I0715 04:40:08.099657 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:40:08.099969 kubelet[3606]: I0715 04:40:08.099918 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64ab1aa13452e26cc1d87cb715753d45-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-130\" (UID: \"64ab1aa13452e26cc1d87cb715753d45\") " pod="kube-system/kube-apiserver-ip-172-31-22-130" Jul 15 04:40:08.100244 kubelet[3606]: I0715 04:40:08.100111 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:40:08.100244 kubelet[3606]: I0715 04:40:08.100190 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:40:08.100517 kubelet[3606]: I0715 04:40:08.100438 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:40:08.100517 kubelet[3606]: I0715 04:40:08.100487 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42ff0ec985d976f5f92d59da7c25aa48-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-130\" (UID: \"42ff0ec985d976f5f92d59da7c25aa48\") " pod="kube-system/kube-controller-manager-ip-172-31-22-130" Jul 15 04:40:08.100727 kubelet[3606]: I0715 04:40:08.100672 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dba00ced397519a9d3457c92e5bbc2a-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-130\" (UID: \"8dba00ced397519a9d3457c92e5bbc2a\") " pod="kube-system/kube-scheduler-ip-172-31-22-130" Jul 15 04:40:08.100847 kubelet[3606]: I0715 04:40:08.100822 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64ab1aa13452e26cc1d87cb715753d45-ca-certs\") pod \"kube-apiserver-ip-172-31-22-130\" (UID: \"64ab1aa13452e26cc1d87cb715753d45\") " pod="kube-system/kube-apiserver-ip-172-31-22-130" Jul 15 04:40:08.101022 kubelet[3606]: I0715 04:40:08.100929 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64ab1aa13452e26cc1d87cb715753d45-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-130\" (UID: \"64ab1aa13452e26cc1d87cb715753d45\") " pod="kube-system/kube-apiserver-ip-172-31-22-130" Jul 15 04:40:08.657214 kubelet[3606]: I0715 04:40:08.657148 3606 apiserver.go:52] "Watching apiserver" Jul 15 04:40:08.696161 kubelet[3606]: I0715 04:40:08.696075 3606 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:40:08.892785 kubelet[3606]: I0715 04:40:08.892449 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ip-172-31-22-130" podStartSLOduration=0.892428099 podStartE2EDuration="892.428099ms" podCreationTimestamp="2025-07-15 04:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:08.891979967 +0000 UTC m=+1.364170842" watchObservedRunningTime="2025-07-15 04:40:08.892428099 +0000 UTC m=+1.364618962" Jul 15 04:40:08.957762 kubelet[3606]: I0715 04:40:08.956950 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-130" podStartSLOduration=0.956931267 podStartE2EDuration="956.931267ms" podCreationTimestamp="2025-07-15 04:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:08.909593144 +0000 UTC m=+1.381784031" watchObservedRunningTime="2025-07-15 04:40:08.956931267 +0000 UTC m=+1.429122118" Jul 15 04:40:08.988855 kubelet[3606]: I0715 04:40:08.988729 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-130" podStartSLOduration=0.988711385 podStartE2EDuration="988.711385ms" podCreationTimestamp="2025-07-15 04:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:08.957497792 +0000 UTC m=+1.429688679" watchObservedRunningTime="2025-07-15 04:40:08.988711385 +0000 UTC m=+1.460902260" Jul 15 04:40:12.908843 kubelet[3606]: I0715 04:40:12.908757 3606 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:40:12.912073 containerd[2009]: time="2025-07-15T04:40:12.912025727Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 04:40:12.913898 kubelet[3606]: I0715 04:40:12.913639 3606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:40:13.858029 systemd[1]: Created slice kubepods-besteffort-pod58d5c423_e397_43ef_a0f0_0b8ed37bf09e.slice - libcontainer container kubepods-besteffort-pod58d5c423_e397_43ef_a0f0_0b8ed37bf09e.slice. 
Jul 15 04:40:13.937159 kubelet[3606]: I0715 04:40:13.937073 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swpnl\" (UniqueName: \"kubernetes.io/projected/58d5c423-e397-43ef-a0f0-0b8ed37bf09e-kube-api-access-swpnl\") pod \"kube-proxy-65rlm\" (UID: \"58d5c423-e397-43ef-a0f0-0b8ed37bf09e\") " pod="kube-system/kube-proxy-65rlm" Jul 15 04:40:13.937159 kubelet[3606]: I0715 04:40:13.937146 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58d5c423-e397-43ef-a0f0-0b8ed37bf09e-xtables-lock\") pod \"kube-proxy-65rlm\" (UID: \"58d5c423-e397-43ef-a0f0-0b8ed37bf09e\") " pod="kube-system/kube-proxy-65rlm" Jul 15 04:40:13.937159 kubelet[3606]: I0715 04:40:13.937187 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58d5c423-e397-43ef-a0f0-0b8ed37bf09e-lib-modules\") pod \"kube-proxy-65rlm\" (UID: \"58d5c423-e397-43ef-a0f0-0b8ed37bf09e\") " pod="kube-system/kube-proxy-65rlm" Jul 15 04:40:13.937855 kubelet[3606]: I0715 04:40:13.937258 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58d5c423-e397-43ef-a0f0-0b8ed37bf09e-kube-proxy\") pod \"kube-proxy-65rlm\" (UID: \"58d5c423-e397-43ef-a0f0-0b8ed37bf09e\") " pod="kube-system/kube-proxy-65rlm" Jul 15 04:40:14.038848 kubelet[3606]: I0715 04:40:14.038503 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15d070bc-ad7e-408c-b116-3f22921d91ee-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-m95xz\" (UID: \"15d070bc-ad7e-408c-b116-3f22921d91ee\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-m95xz" Jul 15 04:40:14.043248 kubelet[3606]: I0715 04:40:14.043060 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hw8\" (UniqueName: \"kubernetes.io/projected/15d070bc-ad7e-408c-b116-3f22921d91ee-kube-api-access-b8hw8\") pod \"tigera-operator-5bf8dfcb4-m95xz\" (UID: \"15d070bc-ad7e-408c-b116-3f22921d91ee\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-m95xz" Jul 15 04:40:14.049197 systemd[1]: Created slice kubepods-besteffort-pod15d070bc_ad7e_408c_b116_3f22921d91ee.slice - libcontainer container kubepods-besteffort-pod15d070bc_ad7e_408c_b116_3f22921d91ee.slice. Jul 15 04:40:14.173402 containerd[2009]: time="2025-07-15T04:40:14.173025643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-65rlm,Uid:58d5c423-e397-43ef-a0f0-0b8ed37bf09e,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:14.213948 containerd[2009]: time="2025-07-15T04:40:14.213775846Z" level=info msg="connecting to shim a59a896c773452a29be132ea0732add1ae46e7040bf62747dfd73cc59eadeb3f" address="unix:///run/containerd/s/a2a55b405089e3d54e7224f6b27392c37c9823932e8f00e8444eb14430090936" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:14.261611 systemd[1]: Started cri-containerd-a59a896c773452a29be132ea0732add1ae46e7040bf62747dfd73cc59eadeb3f.scope - libcontainer container a59a896c773452a29be132ea0732add1ae46e7040bf62747dfd73cc59eadeb3f. 
Jul 15 04:40:14.323363 containerd[2009]: time="2025-07-15T04:40:14.323234134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-65rlm,Uid:58d5c423-e397-43ef-a0f0-0b8ed37bf09e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59a896c773452a29be132ea0732add1ae46e7040bf62747dfd73cc59eadeb3f\"" Jul 15 04:40:14.333333 containerd[2009]: time="2025-07-15T04:40:14.331848093Z" level=info msg="CreateContainer within sandbox \"a59a896c773452a29be132ea0732add1ae46e7040bf62747dfd73cc59eadeb3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:40:14.357584 containerd[2009]: time="2025-07-15T04:40:14.357508224Z" level=info msg="Container 17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:14.361076 containerd[2009]: time="2025-07-15T04:40:14.360251504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-m95xz,Uid:15d070bc-ad7e-408c-b116-3f22921d91ee,Namespace:tigera-operator,Attempt:0,}" Jul 15 04:40:14.377028 containerd[2009]: time="2025-07-15T04:40:14.376960981Z" level=info msg="CreateContainer within sandbox \"a59a896c773452a29be132ea0732add1ae46e7040bf62747dfd73cc59eadeb3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b\"" Jul 15 04:40:14.381037 containerd[2009]: time="2025-07-15T04:40:14.380958857Z" level=info msg="StartContainer for \"17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b\"" Jul 15 04:40:14.384779 containerd[2009]: time="2025-07-15T04:40:14.384718160Z" level=info msg="connecting to shim 17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b" address="unix:///run/containerd/s/a2a55b405089e3d54e7224f6b27392c37c9823932e8f00e8444eb14430090936" protocol=ttrpc version=3 Jul 15 04:40:14.423282 containerd[2009]: time="2025-07-15T04:40:14.423100496Z" level=info msg="connecting to shim 7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87" address="unix:///run/containerd/s/4ef28b0d5f56edb296b8487fbeef46a9847113657bcbfdbb81b7afc24a44d2ad" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:14.432636 systemd[1]: Started cri-containerd-17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b.scope - libcontainer container 17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b. Jul 15 04:40:14.482615 systemd[1]: Started cri-containerd-7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87.scope - libcontainer container 7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87. Jul 15 04:40:14.560854 containerd[2009]: time="2025-07-15T04:40:14.560795802Z" level=info msg="StartContainer for \"17a39821b7f0b92458ab98683630c7949279f340890aea7db8c2ea564666371b\" returns successfully" Jul 15 04:40:14.607488 containerd[2009]: time="2025-07-15T04:40:14.607396366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-m95xz,Uid:15d070bc-ad7e-408c-b116-3f22921d91ee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87\"" Jul 15 04:40:14.612890 containerd[2009]: time="2025-07-15T04:40:14.612742644Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 04:40:15.799972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789914052.mount: Deactivated successfully. 
Jul 15 04:40:15.840967 kubelet[3606]: I0715 04:40:15.840507 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-65rlm" podStartSLOduration=2.840480775 podStartE2EDuration="2.840480775s" podCreationTimestamp="2025-07-15 04:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:14.892675012 +0000 UTC m=+7.364866019" watchObservedRunningTime="2025-07-15 04:40:15.840480775 +0000 UTC m=+8.312671626" Jul 15 04:40:16.574123 containerd[2009]: time="2025-07-15T04:40:16.574064930Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:16.576164 containerd[2009]: time="2025-07-15T04:40:16.576089781Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 04:40:16.578361 containerd[2009]: time="2025-07-15T04:40:16.576531112Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:16.586778 containerd[2009]: time="2025-07-15T04:40:16.586699338Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:16.587629 containerd[2009]: time="2025-07-15T04:40:16.587565772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.974760699s" Jul 15 04:40:16.587629 containerd[2009]: time="2025-07-15T04:40:16.587624471Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 04:40:16.594259 containerd[2009]: time="2025-07-15T04:40:16.594181231Z" level=info msg="CreateContainer within sandbox \"7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 04:40:16.612115 containerd[2009]: time="2025-07-15T04:40:16.612049172Z" level=info msg="Container 283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:16.618201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953546758.mount: Deactivated successfully. 
Jul 15 04:40:16.629524 containerd[2009]: time="2025-07-15T04:40:16.629474799Z" level=info msg="CreateContainer within sandbox \"7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\"" Jul 15 04:40:16.632334 containerd[2009]: time="2025-07-15T04:40:16.631499207Z" level=info msg="StartContainer for \"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\"" Jul 15 04:40:16.633657 containerd[2009]: time="2025-07-15T04:40:16.633583668Z" level=info msg="connecting to shim 283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34" address="unix:///run/containerd/s/4ef28b0d5f56edb296b8487fbeef46a9847113657bcbfdbb81b7afc24a44d2ad" protocol=ttrpc version=3 Jul 15 04:40:16.676589 systemd[1]: Started cri-containerd-283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34.scope - libcontainer container 283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34. Jul 15 04:40:16.738759 containerd[2009]: time="2025-07-15T04:40:16.738649969Z" level=info msg="StartContainer for \"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\" returns successfully" Jul 15 04:40:23.647989 sudo[2385]: pam_unix(sudo:session): session closed for user root Jul 15 04:40:23.673398 sshd[2384]: Connection closed by 139.178.89.65 port 57594 Jul 15 04:40:23.673224 sshd-session[2381]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:23.684513 systemd[1]: sshd@8-172.31.22.130:22-139.178.89.65:57594.service: Deactivated successfully. Jul 15 04:40:23.692678 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:40:23.693893 systemd[1]: session-9.scope: Consumed 13.547s CPU time, 223.2M memory peak. Jul 15 04:40:23.699574 systemd-logind[1980]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:40:23.706805 systemd-logind[1980]: Removed session 9. Jul 15 04:40:32.922452 kubelet[3606]: I0715 04:40:32.922353 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-m95xz" podStartSLOduration=17.945008174 podStartE2EDuration="19.922307968s" podCreationTimestamp="2025-07-15 04:40:13 +0000 UTC" firstStartedPulling="2025-07-15 04:40:14.612106278 +0000 UTC m=+7.084297129" lastFinishedPulling="2025-07-15 04:40:16.58940606 +0000 UTC m=+9.061596923" observedRunningTime="2025-07-15 04:40:16.901072345 +0000 UTC m=+9.373263304" watchObservedRunningTime="2025-07-15 04:40:32.922307968 +0000 UTC m=+25.394498843" Jul 15 04:40:32.942075 systemd[1]: Created slice kubepods-besteffort-podaeddbed1_1f73_46c7_938b_dda5d7b7eb68.slice - libcontainer container kubepods-besteffort-podaeddbed1_1f73_46c7_938b_dda5d7b7eb68.slice. 
Jul 15 04:40:32.973158 kubelet[3606]: I0715 04:40:32.973066 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aeddbed1-1f73-46c7-938b-dda5d7b7eb68-typha-certs\") pod \"calico-typha-bc587588c-xt7ms\" (UID: \"aeddbed1-1f73-46c7-938b-dda5d7b7eb68\") " pod="calico-system/calico-typha-bc587588c-xt7ms" Jul 15 04:40:32.973158 kubelet[3606]: I0715 04:40:32.973137 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeddbed1-1f73-46c7-938b-dda5d7b7eb68-tigera-ca-bundle\") pod \"calico-typha-bc587588c-xt7ms\" (UID: \"aeddbed1-1f73-46c7-938b-dda5d7b7eb68\") " pod="calico-system/calico-typha-bc587588c-xt7ms" Jul 15 04:40:32.973410 kubelet[3606]: I0715 04:40:32.973185 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9lpb\" (UniqueName: \"kubernetes.io/projected/aeddbed1-1f73-46c7-938b-dda5d7b7eb68-kube-api-access-v9lpb\") pod \"calico-typha-bc587588c-xt7ms\" (UID: \"aeddbed1-1f73-46c7-938b-dda5d7b7eb68\") " pod="calico-system/calico-typha-bc587588c-xt7ms" Jul 15 04:40:33.254399 containerd[2009]: time="2025-07-15T04:40:33.253631520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bc587588c-xt7ms,Uid:aeddbed1-1f73-46c7-938b-dda5d7b7eb68,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:33.323179 containerd[2009]: time="2025-07-15T04:40:33.323101812Z" level=info msg="connecting to shim fa5022793532e18095d6098f3b4b685ee5cff7ebfcb19cb75b552096323cb438" address="unix:///run/containerd/s/63644ccc2be1e2b0770fd5eabef270f9f7fd2366d009a1026a06fa6edd2892a4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:33.360757 systemd[1]: Created slice kubepods-besteffort-pod880831ad_7cf9_46c3_bc93_5271fbde5555.slice - libcontainer container kubepods-besteffort-pod880831ad_7cf9_46c3_bc93_5271fbde5555.slice. 
Jul 15 04:40:33.380119 kubelet[3606]: I0715 04:40:33.378476 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-flexvol-driver-host\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380119 kubelet[3606]: I0715 04:40:33.378704 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-policysync\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380119 kubelet[3606]: I0715 04:40:33.378778 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-var-lib-calico\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380119 kubelet[3606]: I0715 04:40:33.378868 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-lib-modules\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380119 kubelet[3606]: I0715 04:40:33.378985 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f87g\" (UniqueName: \"kubernetes.io/projected/880831ad-7cf9-46c3-bc93-5271fbde5555-kube-api-access-8f87g\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380471 kubelet[3606]: I0715 04:40:33.379133 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/880831ad-7cf9-46c3-bc93-5271fbde5555-node-certs\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380471 kubelet[3606]: I0715 04:40:33.379194 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-xtables-lock\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380471 kubelet[3606]: I0715 04:40:33.379452 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-cni-bin-dir\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380471 kubelet[3606]: I0715 04:40:33.379527 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-cni-net-dir\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380471 kubelet[3606]: I0715 04:40:33.380396 3606 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-cni-log-dir\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380770 kubelet[3606]: I0715 04:40:33.380533 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/880831ad-7cf9-46c3-bc93-5271fbde5555-tigera-ca-bundle\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.380770 kubelet[3606]: I0715 04:40:33.380642 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/880831ad-7cf9-46c3-bc93-5271fbde5555-var-run-calico\") pod \"calico-node-nk99r\" (UID: \"880831ad-7cf9-46c3-bc93-5271fbde5555\") " pod="calico-system/calico-node-nk99r" Jul 15 04:40:33.453607 systemd[1]: Started cri-containerd-fa5022793532e18095d6098f3b4b685ee5cff7ebfcb19cb75b552096323cb438.scope - libcontainer container fa5022793532e18095d6098f3b4b685ee5cff7ebfcb19cb75b552096323cb438. Jul 15 04:40:33.496206 kubelet[3606]: E0715 04:40:33.494625 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:33.512041 kubelet[3606]: E0715 04:40:33.511896 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.512849 kubelet[3606]: W0715 04:40:33.512372 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.513406 kubelet[3606]: E0715 04:40:33.513349 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.523721 kubelet[3606]: E0715 04:40:33.522950 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.523848 kubelet[3606]: W0715 04:40:33.523518 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.523925 kubelet[3606]: E0715 04:40:33.523852 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.525463 kubelet[3606]: E0715 04:40:33.525413 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.525463 kubelet[3606]: W0715 04:40:33.525452 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.525659 kubelet[3606]: E0715 04:40:33.525606 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.527119 kubelet[3606]: E0715 04:40:33.527074 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.527119 kubelet[3606]: W0715 04:40:33.527109 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.527479 kubelet[3606]: E0715 04:40:33.527139 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.528932 kubelet[3606]: E0715 04:40:33.528873 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.529087 kubelet[3606]: W0715 04:40:33.528931 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.529087 kubelet[3606]: E0715 04:40:33.529035 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.529681 kubelet[3606]: E0715 04:40:33.529640 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.530373 kubelet[3606]: W0715 04:40:33.529673 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.530598 kubelet[3606]: E0715 04:40:33.530544 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.531031 kubelet[3606]: E0715 04:40:33.530984 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.531380 kubelet[3606]: W0715 04:40:33.531017 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.532448 kubelet[3606]: E0715 04:40:33.532388 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.532935 kubelet[3606]: E0715 04:40:33.532893 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.532935 kubelet[3606]: W0715 04:40:33.532927 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.533097 kubelet[3606]: E0715 04:40:33.533052 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.533534 kubelet[3606]: E0715 04:40:33.533498 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.533534 kubelet[3606]: W0715 04:40:33.533528 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.533647 kubelet[3606]: E0715 04:40:33.533554 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.571020 kubelet[3606]: E0715 04:40:33.570970 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.571159 kubelet[3606]: W0715 04:40:33.571007 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.571159 kubelet[3606]: E0715 04:40:33.571062 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.572703 kubelet[3606]: E0715 04:40:33.572650 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.572703 kubelet[3606]: W0715 04:40:33.572691 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.572888 kubelet[3606]: E0715 04:40:33.572750 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.573355 kubelet[3606]: E0715 04:40:33.573281 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.573355 kubelet[3606]: W0715 04:40:33.573345 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.574403 kubelet[3606]: E0715 04:40:33.573374 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.575217 kubelet[3606]: E0715 04:40:33.575166 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.575363 kubelet[3606]: W0715 04:40:33.575205 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.575363 kubelet[3606]: E0715 04:40:33.575263 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.576123 kubelet[3606]: E0715 04:40:33.575950 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.576123 kubelet[3606]: W0715 04:40:33.576005 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.576123 kubelet[3606]: E0715 04:40:33.576035 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.577794 kubelet[3606]: E0715 04:40:33.577748 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.577794 kubelet[3606]: W0715 04:40:33.577786 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.577943 kubelet[3606]: E0715 04:40:33.577819 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.578334 kubelet[3606]: E0715 04:40:33.578132 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.578334 kubelet[3606]: W0715 04:40:33.578159 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.578334 kubelet[3606]: E0715 04:40:33.578180 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.578517 kubelet[3606]: E0715 04:40:33.578485 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.578517 kubelet[3606]: W0715 04:40:33.578501 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.578604 kubelet[3606]: E0715 04:40:33.578532 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.580311 kubelet[3606]: E0715 04:40:33.578856 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.580311 kubelet[3606]: W0715 04:40:33.578881 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.580311 kubelet[3606]: E0715 04:40:33.578902 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.580311 kubelet[3606]: E0715 04:40:33.579502 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.580311 kubelet[3606]: W0715 04:40:33.579521 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.580311 kubelet[3606]: E0715 04:40:33.579546 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.580311 kubelet[3606]: E0715 04:40:33.579833 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.580311 kubelet[3606]: W0715 04:40:33.579848 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.580311 kubelet[3606]: E0715 04:40:33.579866 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.580796 kubelet[3606]: E0715 04:40:33.580434 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.580796 kubelet[3606]: W0715 04:40:33.580455 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.580796 kubelet[3606]: E0715 04:40:33.580527 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.581711 kubelet[3606]: E0715 04:40:33.581654 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.581711 kubelet[3606]: W0715 04:40:33.581690 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.581876 kubelet[3606]: E0715 04:40:33.581721 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.582098 kubelet[3606]: E0715 04:40:33.582063 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.582098 kubelet[3606]: W0715 04:40:33.582091 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.582216 kubelet[3606]: E0715 04:40:33.582113 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.582463 kubelet[3606]: E0715 04:40:33.582430 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.582463 kubelet[3606]: W0715 04:40:33.582456 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.582581 kubelet[3606]: E0715 04:40:33.582477 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.582773 kubelet[3606]: E0715 04:40:33.582743 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.582773 kubelet[3606]: W0715 04:40:33.582766 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.582865 kubelet[3606]: E0715 04:40:33.582787 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.584671 kubelet[3606]: E0715 04:40:33.584543 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.584793 kubelet[3606]: W0715 04:40:33.584706 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.584793 kubelet[3606]: E0715 04:40:33.584739 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.585359 kubelet[3606]: E0715 04:40:33.585101 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.585359 kubelet[3606]: W0715 04:40:33.585132 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.585359 kubelet[3606]: E0715 04:40:33.585159 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.586556 kubelet[3606]: E0715 04:40:33.586505 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.586556 kubelet[3606]: W0715 04:40:33.586543 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.586749 kubelet[3606]: E0715 04:40:33.586576 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.589593 kubelet[3606]: E0715 04:40:33.589511 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.589593 kubelet[3606]: W0715 04:40:33.589554 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.589593 kubelet[3606]: E0715 04:40:33.589588 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.590095 kubelet[3606]: E0715 04:40:33.590050 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.590095 kubelet[3606]: W0715 04:40:33.590085 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.590248 kubelet[3606]: E0715 04:40:33.590110 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.590248 kubelet[3606]: I0715 04:40:33.590149 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d26261d-e960-406a-9b63-17a87a2b10d4-kubelet-dir\") pod \"csi-node-driver-gc67v\" (UID: \"2d26261d-e960-406a-9b63-17a87a2b10d4\") " pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:33.591324 kubelet[3606]: E0715 04:40:33.590459 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.591324 kubelet[3606]: W0715 04:40:33.590489 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.591324 kubelet[3606]: E0715 04:40:33.590513 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.591324 kubelet[3606]: I0715 04:40:33.590544 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d26261d-e960-406a-9b63-17a87a2b10d4-registration-dir\") pod \"csi-node-driver-gc67v\" (UID: \"2d26261d-e960-406a-9b63-17a87a2b10d4\") " pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:33.591598 kubelet[3606]: E0715 04:40:33.591492 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.591598 kubelet[3606]: W0715 04:40:33.591530 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.591598 kubelet[3606]: E0715 04:40:33.591562 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.591743 kubelet[3606]: I0715 04:40:33.591608 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9nvq\" (UniqueName: \"kubernetes.io/projected/2d26261d-e960-406a-9b63-17a87a2b10d4-kube-api-access-l9nvq\") pod \"csi-node-driver-gc67v\" (UID: \"2d26261d-e960-406a-9b63-17a87a2b10d4\") " pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:33.592561 kubelet[3606]: E0715 04:40:33.592504 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.592705 kubelet[3606]: W0715 04:40:33.592645 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.592792 kubelet[3606]: E0715 04:40:33.592755 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.595320 kubelet[3606]: E0715 04:40:33.594687 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.595320 kubelet[3606]: W0715 04:40:33.594727 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.595320 kubelet[3606]: E0715 04:40:33.594762 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.595320 kubelet[3606]: E0715 04:40:33.595036 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.595320 kubelet[3606]: W0715 04:40:33.595051 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.595320 kubelet[3606]: E0715 04:40:33.595104 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.595320 kubelet[3606]: I0715 04:40:33.595195 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d26261d-e960-406a-9b63-17a87a2b10d4-socket-dir\") pod \"csi-node-driver-gc67v\" (UID: \"2d26261d-e960-406a-9b63-17a87a2b10d4\") " pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:33.596124 kubelet[3606]: E0715 04:40:33.596070 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.596262 kubelet[3606]: W0715 04:40:33.596220 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.596739 kubelet[3606]: E0715 04:40:33.596677 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.596739 kubelet[3606]: W0715 04:40:33.596727 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.596892 kubelet[3606]: E0715 04:40:33.596759 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.597049 kubelet[3606]: E0715 04:40:33.597012 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.597107 kubelet[3606]: W0715 04:40:33.597065 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.597107 kubelet[3606]: E0715 04:40:33.597087 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.597389 kubelet[3606]: E0715 04:40:33.597348 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.599123 kubelet[3606]: E0715 04:40:33.599052 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.599252 kubelet[3606]: W0715 04:40:33.599206 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.599703 kubelet[3606]: E0715 04:40:33.599376 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.603347 kubelet[3606]: E0715 04:40:33.602470 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.603347 kubelet[3606]: W0715 04:40:33.602514 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.603347 kubelet[3606]: E0715 04:40:33.602559 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.604190 kubelet[3606]: E0715 04:40:33.604125 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.604190 kubelet[3606]: W0715 04:40:33.604168 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.604368 kubelet[3606]: E0715 04:40:33.604202 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.605060 kubelet[3606]: E0715 04:40:33.605009 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.605060 kubelet[3606]: W0715 04:40:33.605045 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.605245 kubelet[3606]: E0715 04:40:33.605075 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.607672 kubelet[3606]: E0715 04:40:33.607539 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.607672 kubelet[3606]: W0715 04:40:33.607581 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.607672 kubelet[3606]: E0715 04:40:33.607616 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.607672 kubelet[3606]: I0715 04:40:33.607658 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d26261d-e960-406a-9b63-17a87a2b10d4-varrun\") pod \"csi-node-driver-gc67v\" (UID: \"2d26261d-e960-406a-9b63-17a87a2b10d4\") " pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:33.609713 kubelet[3606]: E0715 04:40:33.609277 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.609713 kubelet[3606]: W0715 04:40:33.609454 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.609713 kubelet[3606]: E0715 04:40:33.609489 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.610840 kubelet[3606]: E0715 04:40:33.610676 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.610840 kubelet[3606]: W0715 04:40:33.610717 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.610840 kubelet[3606]: E0715 04:40:33.610752 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.612199 kubelet[3606]: E0715 04:40:33.612138 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.612199 kubelet[3606]: W0715 04:40:33.612175 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.612409 kubelet[3606]: E0715 04:40:33.612207 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.694314 containerd[2009]: time="2025-07-15T04:40:33.694128011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nk99r,Uid:880831ad-7cf9-46c3-bc93-5271fbde5555,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:33.709414 kubelet[3606]: E0715 04:40:33.709363 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.710735 containerd[2009]: time="2025-07-15T04:40:33.710677380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bc587588c-xt7ms,Uid:aeddbed1-1f73-46c7-938b-dda5d7b7eb68,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa5022793532e18095d6098f3b4b685ee5cff7ebfcb19cb75b552096323cb438\"" Jul 15 04:40:33.711315 kubelet[3606]: W0715 04:40:33.711256 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.711454 kubelet[3606]: E0715 04:40:33.711371 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.714014 kubelet[3606]: E0715 04:40:33.713963 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.714014 kubelet[3606]: W0715 04:40:33.714000 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.714224 kubelet[3606]: E0715 04:40:33.714035 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.716060 containerd[2009]: time="2025-07-15T04:40:33.715673049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 04:40:33.716207 kubelet[3606]: E0715 04:40:33.716131 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.717390 kubelet[3606]: W0715 04:40:33.716157 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.717390 kubelet[3606]: E0715 04:40:33.716376 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.721229 kubelet[3606]: E0715 04:40:33.721170 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.721229 kubelet[3606]: W0715 04:40:33.721210 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.721589 kubelet[3606]: E0715 04:40:33.721328 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.722685 kubelet[3606]: E0715 04:40:33.722630 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.722888 kubelet[3606]: W0715 04:40:33.722670 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.722888 kubelet[3606]: E0715 04:40:33.722746 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.726110 kubelet[3606]: E0715 04:40:33.726036 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.726110 kubelet[3606]: W0715 04:40:33.726107 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.727513 kubelet[3606]: E0715 04:40:33.726153 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.729057 kubelet[3606]: E0715 04:40:33.727708 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.729057 kubelet[3606]: W0715 04:40:33.727734 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.729057 kubelet[3606]: E0715 04:40:33.727795 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.729793 kubelet[3606]: E0715 04:40:33.729628 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.729793 kubelet[3606]: W0715 04:40:33.729739 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.730261 kubelet[3606]: E0715 04:40:33.730195 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.732973 kubelet[3606]: E0715 04:40:33.732863 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.732973 kubelet[3606]: W0715 04:40:33.732920 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.733364 kubelet[3606]: E0715 04:40:33.733270 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.733890 kubelet[3606]: E0715 04:40:33.733862 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.734191 kubelet[3606]: W0715 04:40:33.734007 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.737356 kubelet[3606]: E0715 04:40:33.736782 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.738483 kubelet[3606]: E0715 04:40:33.738125 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.738483 kubelet[3606]: W0715 04:40:33.738156 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.739520 kubelet[3606]: E0715 04:40:33.739441 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.740105 kubelet[3606]: E0715 04:40:33.740054 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.741517 kubelet[3606]: W0715 04:40:33.741396 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.741680 kubelet[3606]: E0715 04:40:33.741602 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.744521 kubelet[3606]: E0715 04:40:33.744457 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.744521 kubelet[3606]: W0715 04:40:33.744499 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.746324 kubelet[3606]: E0715 04:40:33.746253 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.746817 kubelet[3606]: E0715 04:40:33.746500 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.746817 kubelet[3606]: W0715 04:40:33.746531 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.748550 kubelet[3606]: E0715 04:40:33.746872 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.748550 kubelet[3606]: E0715 04:40:33.747794 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.748550 kubelet[3606]: W0715 04:40:33.747827 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.749545 kubelet[3606]: E0715 04:40:33.749497 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.749545 kubelet[3606]: W0715 04:40:33.749535 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.751344 kubelet[3606]: E0715 04:40:33.750689 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.751344 kubelet[3606]: E0715 04:40:33.750867 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.751846 kubelet[3606]: E0715 04:40:33.751606 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.751846 kubelet[3606]: W0715 04:40:33.751633 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.752130 kubelet[3606]: E0715 04:40:33.752013 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.754768 kubelet[3606]: E0715 04:40:33.754659 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.754768 kubelet[3606]: W0715 04:40:33.754699 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.757113 kubelet[3606]: E0715 04:40:33.757057 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.757520 kubelet[3606]: E0715 04:40:33.757481 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.757520 kubelet[3606]: W0715 04:40:33.757513 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.758012 kubelet[3606]: E0715 04:40:33.757967 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.759584 kubelet[3606]: E0715 04:40:33.759531 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.759584 kubelet[3606]: W0715 04:40:33.759571 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.760713 kubelet[3606]: E0715 04:40:33.760616 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.762353 kubelet[3606]: E0715 04:40:33.761723 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.762353 kubelet[3606]: W0715 04:40:33.761752 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.765545 kubelet[3606]: E0715 04:40:33.763254 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.765545 kubelet[3606]: E0715 04:40:33.763865 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.765545 kubelet[3606]: W0715 04:40:33.763889 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.765545 kubelet[3606]: E0715 04:40:33.764003 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.765545 kubelet[3606]: E0715 04:40:33.765123 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.765545 kubelet[3606]: W0715 04:40:33.765148 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.765545 kubelet[3606]: E0715 04:40:33.765415 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.767333 kubelet[3606]: E0715 04:40:33.766140 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.767333 kubelet[3606]: W0715 04:40:33.766179 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.767333 kubelet[3606]: E0715 04:40:33.766217 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:33.768633 kubelet[3606]: E0715 04:40:33.767857 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.768633 kubelet[3606]: W0715 04:40:33.767919 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.768633 kubelet[3606]: E0715 04:40:33.767953 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.794012 containerd[2009]: time="2025-07-15T04:40:33.793792370Z" level=info msg="connecting to shim 1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f" address="unix:///run/containerd/s/2c04a19ec7a19b49c99a48df2103f0f9d9d45674886428ce73220e365e42392a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:33.842638 kubelet[3606]: E0715 04:40:33.842586 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.842638 kubelet[3606]: W0715 04:40:33.842626 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.842839 kubelet[3606]: E0715 04:40:33.842661 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.853162 systemd[1]: Started cri-containerd-1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f.scope - libcontainer container 1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f. Jul 15 04:40:33.882411 kubelet[3606]: E0715 04:40:33.882362 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:33.882411 kubelet[3606]: W0715 04:40:33.882401 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:33.882581 kubelet[3606]: E0715 04:40:33.882433 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:33.960221 containerd[2009]: time="2025-07-15T04:40:33.960129157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nk99r,Uid:880831ad-7cf9-46c3-bc93-5271fbde5555,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\"" Jul 15 04:40:34.775042 kubelet[3606]: E0715 04:40:34.774952 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:35.049271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849677953.mount: Deactivated successfully. 
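The kubelet errors repeated throughout this stretch are all one probe: kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and tries to run its uds driver with the init argument. The binary is not on disk yet (Calico's flexvol-driver container, which starts further below, is what normally installs it), so the exec fails, the captured output is empty, and decoding that empty output is what yields "unexpected end of JSON input". The following is a minimal Go sketch of that failure mode only; it is not kubelet's driver-call.go, the driverStatus fields are illustrative, and only the path, the init argument, and the error wording are taken from the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the JSON a FlexVolume driver is expected to
// print in response to "init"; the field names here are illustrative.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeInit(driver string) {
	// With the uds binary absent, the exec fails and the output stays empty.
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}

	// Unmarshalling empty output is what produces "unexpected end of JSON input".
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("failed to unmarshal output for command: init, error: %v\n", err)
	}
}

func main() {
	probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
}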
Jul 15 04:40:36.527870 containerd[2009]: time="2025-07-15T04:40:36.527810250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:36.529591 containerd[2009]: time="2025-07-15T04:40:36.529548720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 15 04:40:36.530084 containerd[2009]: time="2025-07-15T04:40:36.530009026Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:36.533464 containerd[2009]: time="2025-07-15T04:40:36.533386080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:36.534862 containerd[2009]: time="2025-07-15T04:40:36.534687753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.818954722s" Jul 15 04:40:36.534862 containerd[2009]: time="2025-07-15T04:40:36.534735489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 04:40:36.537659 containerd[2009]: time="2025-07-15T04:40:36.537612322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 04:40:36.573928 containerd[2009]: time="2025-07-15T04:40:36.573829285Z" level=info msg="CreateContainer within sandbox \"fa5022793532e18095d6098f3b4b685ee5cff7ebfcb19cb75b552096323cb438\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 04:40:36.586515 containerd[2009]: time="2025-07-15T04:40:36.585772730Z" level=info msg="Container 70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:36.608906 containerd[2009]: time="2025-07-15T04:40:36.608823603Z" level=info msg="CreateContainer within sandbox \"fa5022793532e18095d6098f3b4b685ee5cff7ebfcb19cb75b552096323cb438\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816\"" Jul 15 04:40:36.609836 containerd[2009]: time="2025-07-15T04:40:36.609783051Z" level=info msg="StartContainer for \"70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816\"" Jul 15 04:40:36.613991 containerd[2009]: time="2025-07-15T04:40:36.613928981Z" level=info msg="connecting to shim 70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816" address="unix:///run/containerd/s/63644ccc2be1e2b0770fd5eabef270f9f7fd2366d009a1026a06fa6edd2892a4" protocol=ttrpc version=3 Jul 15 04:40:36.657489 systemd[1]: Started cri-containerd-70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816.scope - libcontainer container 70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816. 
Jul 15 04:40:36.758123 containerd[2009]: time="2025-07-15T04:40:36.757718091Z" level=info msg="StartContainer for \"70865e834c4e9450aee051bab3b3352f73ce9cd1d65445e8b50309806a0a6816\" returns successfully" Jul 15 04:40:36.775309 kubelet[3606]: E0715 04:40:36.775095 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:37.024866 kubelet[3606]: E0715 04:40:37.024806 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.025092 kubelet[3606]: W0715 04:40:37.024839 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.025092 kubelet[3606]: E0715 04:40:37.025039 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.027623 kubelet[3606]: E0715 04:40:37.027552 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.027623 kubelet[3606]: W0715 04:40:37.027593 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.027623 kubelet[3606]: E0715 04:40:37.027628 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.028181 kubelet[3606]: E0715 04:40:37.028141 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.028181 kubelet[3606]: W0715 04:40:37.028172 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.028329 kubelet[3606]: E0715 04:40:37.028198 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.029012 kubelet[3606]: E0715 04:40:37.028945 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.029012 kubelet[3606]: W0715 04:40:37.028983 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.029012 kubelet[3606]: E0715 04:40:37.029014 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.031910 kubelet[3606]: E0715 04:40:37.031459 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.031910 kubelet[3606]: W0715 04:40:37.031500 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.031910 kubelet[3606]: E0715 04:40:37.031532 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.032865 kubelet[3606]: E0715 04:40:37.032809 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.032865 kubelet[3606]: W0715 04:40:37.032842 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.033128 kubelet[3606]: E0715 04:40:37.032877 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.034569 kubelet[3606]: E0715 04:40:37.034442 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.034569 kubelet[3606]: W0715 04:40:37.034480 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.034569 kubelet[3606]: E0715 04:40:37.034512 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.035168 kubelet[3606]: E0715 04:40:37.035009 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.035168 kubelet[3606]: W0715 04:40:37.035032 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.035168 kubelet[3606]: E0715 04:40:37.035057 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.036485 kubelet[3606]: E0715 04:40:37.036435 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.036485 kubelet[3606]: W0715 04:40:37.036470 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.037537 kubelet[3606]: E0715 04:40:37.036504 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.037537 kubelet[3606]: E0715 04:40:37.037523 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.037856 kubelet[3606]: W0715 04:40:37.037551 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.037856 kubelet[3606]: E0715 04:40:37.037584 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.038224 kubelet[3606]: E0715 04:40:37.038172 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.038224 kubelet[3606]: W0715 04:40:37.038205 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.039400 kubelet[3606]: E0715 04:40:37.038232 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.039670 kubelet[3606]: E0715 04:40:37.039442 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.039670 kubelet[3606]: W0715 04:40:37.039473 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.039670 kubelet[3606]: E0715 04:40:37.039505 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.039940 kubelet[3606]: E0715 04:40:37.039889 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.039940 kubelet[3606]: W0715 04:40:37.039909 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.039940 kubelet[3606]: E0715 04:40:37.039929 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.040274 kubelet[3606]: E0715 04:40:37.040184 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.040274 kubelet[3606]: W0715 04:40:37.040200 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.040274 kubelet[3606]: E0715 04:40:37.040218 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.040568 kubelet[3606]: E0715 04:40:37.040509 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.040568 kubelet[3606]: W0715 04:40:37.040525 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.040568 kubelet[3606]: E0715 04:40:37.040547 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.074752 kubelet[3606]: E0715 04:40:37.074529 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.074752 kubelet[3606]: W0715 04:40:37.074576 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.074752 kubelet[3606]: E0715 04:40:37.074608 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.075809 kubelet[3606]: E0715 04:40:37.075770 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.075809 kubelet[3606]: W0715 04:40:37.075798 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.075942 kubelet[3606]: E0715 04:40:37.075829 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.077784 kubelet[3606]: E0715 04:40:37.077734 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.077784 kubelet[3606]: W0715 04:40:37.077772 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.078417 kubelet[3606]: E0715 04:40:37.077817 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.078815 kubelet[3606]: E0715 04:40:37.078767 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.078815 kubelet[3606]: W0715 04:40:37.078806 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.079056 kubelet[3606]: E0715 04:40:37.078925 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.079631 kubelet[3606]: E0715 04:40:37.079588 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.079631 kubelet[3606]: W0715 04:40:37.079621 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.080353 kubelet[3606]: E0715 04:40:37.080258 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.080956 kubelet[3606]: E0715 04:40:37.080561 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.080956 kubelet[3606]: W0715 04:40:37.080931 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.081261 kubelet[3606]: E0715 04:40:37.081096 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.083605 kubelet[3606]: E0715 04:40:37.083517 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.083885 kubelet[3606]: W0715 04:40:37.083821 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.084043 kubelet[3606]: E0715 04:40:37.083996 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.085632 kubelet[3606]: E0715 04:40:37.085517 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.085632 kubelet[3606]: W0715 04:40:37.085566 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.085961 kubelet[3606]: E0715 04:40:37.085910 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.085961 kubelet[3606]: E0715 04:40:37.085940 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.085961 kubelet[3606]: W0715 04:40:37.085956 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.086397 kubelet[3606]: E0715 04:40:37.086354 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.087487 kubelet[3606]: E0715 04:40:37.087433 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.087487 kubelet[3606]: W0715 04:40:37.087474 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.087749 kubelet[3606]: E0715 04:40:37.087693 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.087917 kubelet[3606]: E0715 04:40:37.087885 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.087917 kubelet[3606]: W0715 04:40:37.087910 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.088164 kubelet[3606]: E0715 04:40:37.088126 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.088311 kubelet[3606]: E0715 04:40:37.088269 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.088475 kubelet[3606]: W0715 04:40:37.088435 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.088475 kubelet[3606]: E0715 04:40:37.088504 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.089950 kubelet[3606]: E0715 04:40:37.089892 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.090089 kubelet[3606]: W0715 04:40:37.089937 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.090162 kubelet[3606]: E0715 04:40:37.090124 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.094088 kubelet[3606]: E0715 04:40:37.093431 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.094088 kubelet[3606]: W0715 04:40:37.093471 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.094088 kubelet[3606]: E0715 04:40:37.093521 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.094865 kubelet[3606]: E0715 04:40:37.094811 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.095457 kubelet[3606]: W0715 04:40:37.094854 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.095599 kubelet[3606]: E0715 04:40:37.095501 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.097767 kubelet[3606]: E0715 04:40:37.097525 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.097767 kubelet[3606]: W0715 04:40:37.097565 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.097767 kubelet[3606]: E0715 04:40:37.097614 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.098306 kubelet[3606]: E0715 04:40:37.098168 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.098306 kubelet[3606]: W0715 04:40:37.098208 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.098438 kubelet[3606]: E0715 04:40:37.098282 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:37.099505 kubelet[3606]: E0715 04:40:37.098759 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:37.099505 kubelet[3606]: W0715 04:40:37.098796 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:37.099505 kubelet[3606]: E0715 04:40:37.098826 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:37.728844 containerd[2009]: time="2025-07-15T04:40:37.728729173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:37.731328 containerd[2009]: time="2025-07-15T04:40:37.730853539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 04:40:37.733325 containerd[2009]: time="2025-07-15T04:40:37.732501934Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:37.740067 containerd[2009]: time="2025-07-15T04:40:37.739999431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:37.743792 containerd[2009]: time="2025-07-15T04:40:37.743731317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.205910503s" Jul 15 04:40:37.745332 containerd[2009]: time="2025-07-15T04:40:37.744002525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 04:40:37.752788 containerd[2009]: time="2025-07-15T04:40:37.752721755Z" level=info msg="CreateContainer within sandbox \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 04:40:37.776832 containerd[2009]: time="2025-07-15T04:40:37.776712370Z" level=info msg="Container 5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:37.798602 containerd[2009]: time="2025-07-15T04:40:37.798508251Z" level=info msg="CreateContainer within sandbox \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\"" Jul 15 04:40:37.799890 containerd[2009]: time="2025-07-15T04:40:37.799795363Z" level=info msg="StartContainer for \"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\"" Jul 15 04:40:37.804417 containerd[2009]: time="2025-07-15T04:40:37.804217011Z" level=info msg="connecting to shim 5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e" address="unix:///run/containerd/s/2c04a19ec7a19b49c99a48df2103f0f9d9d45674886428ce73220e365e42392a" protocol=ttrpc version=3 Jul 15 04:40:37.868694 systemd[1]: Started cri-containerd-5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e.scope - libcontainer container 5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e. 
Jul 15 04:40:38.010766 kubelet[3606]: I0715 04:40:38.010575 3606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:40:38.023991 containerd[2009]: time="2025-07-15T04:40:38.023922492Z" level=info msg="StartContainer for \"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\" returns successfully" Jul 15 04:40:38.049364 kubelet[3606]: E0715 04:40:38.049142 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.049364 kubelet[3606]: W0715 04:40:38.049173 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.049364 kubelet[3606]: E0715 04:40:38.049202 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.050316 kubelet[3606]: E0715 04:40:38.050163 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.050316 kubelet[3606]: W0715 04:40:38.050199 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.050316 kubelet[3606]: E0715 04:40:38.050247 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.051626 kubelet[3606]: E0715 04:40:38.051270 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.051626 kubelet[3606]: W0715 04:40:38.051344 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.051626 kubelet[3606]: E0715 04:40:38.051376 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.052265 kubelet[3606]: E0715 04:40:38.052222 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.052265 kubelet[3606]: W0715 04:40:38.052256 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.052461 kubelet[3606]: E0715 04:40:38.052305 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:38.053164 kubelet[3606]: E0715 04:40:38.053116 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.053421 kubelet[3606]: W0715 04:40:38.053372 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.053623 kubelet[3606]: E0715 04:40:38.053441 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.054441 kubelet[3606]: E0715 04:40:38.054395 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.054441 kubelet[3606]: W0715 04:40:38.054432 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.054587 kubelet[3606]: E0715 04:40:38.054463 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.055343 kubelet[3606]: E0715 04:40:38.055275 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.055343 kubelet[3606]: W0715 04:40:38.055332 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.055546 kubelet[3606]: E0715 04:40:38.055364 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.056358 kubelet[3606]: E0715 04:40:38.056252 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.056358 kubelet[3606]: W0715 04:40:38.056350 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.056499 kubelet[3606]: E0715 04:40:38.056389 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.057852 kubelet[3606]: E0715 04:40:38.057799 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.057852 kubelet[3606]: W0715 04:40:38.057840 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.058070 kubelet[3606]: E0715 04:40:38.057873 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:38.058909 kubelet[3606]: E0715 04:40:38.058858 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.058909 kubelet[3606]: W0715 04:40:38.058896 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.059522 kubelet[3606]: E0715 04:40:38.058928 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.059907 kubelet[3606]: E0715 04:40:38.059861 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.059907 kubelet[3606]: W0715 04:40:38.059898 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.060074 kubelet[3606]: E0715 04:40:38.059930 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.060948 kubelet[3606]: E0715 04:40:38.060901 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.060948 kubelet[3606]: W0715 04:40:38.060937 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.061145 kubelet[3606]: E0715 04:40:38.060968 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.061916 kubelet[3606]: E0715 04:40:38.061870 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.061916 kubelet[3606]: W0715 04:40:38.061907 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.062089 kubelet[3606]: E0715 04:40:38.061938 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.062918 kubelet[3606]: E0715 04:40:38.062873 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.062918 kubelet[3606]: W0715 04:40:38.062909 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.063076 kubelet[3606]: E0715 04:40:38.062940 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:38.063764 kubelet[3606]: E0715 04:40:38.063718 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.063764 kubelet[3606]: W0715 04:40:38.063753 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.063915 kubelet[3606]: E0715 04:40:38.063784 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.085183 kubelet[3606]: E0715 04:40:38.085115 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.085593 kubelet[3606]: W0715 04:40:38.085436 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.085593 kubelet[3606]: E0715 04:40:38.085537 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.088686 kubelet[3606]: E0715 04:40:38.088615 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.088871 kubelet[3606]: W0715 04:40:38.088650 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.089335 kubelet[3606]: E0715 04:40:38.089228 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.089576 kubelet[3606]: E0715 04:40:38.089553 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.089741 kubelet[3606]: W0715 04:40:38.089696 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.089845 kubelet[3606]: E0715 04:40:38.089824 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.090535 kubelet[3606]: E0715 04:40:38.090472 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.090535 kubelet[3606]: W0715 04:40:38.090503 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.090765 kubelet[3606]: E0715 04:40:38.090683 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:40:38.091187 kubelet[3606]: E0715 04:40:38.091159 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.091368 kubelet[3606]: W0715 04:40:38.091342 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.091508 kubelet[3606]: E0715 04:40:38.091487 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.092103 kubelet[3606]: E0715 04:40:38.092048 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.092103 kubelet[3606]: W0715 04:40:38.092073 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.092503 kubelet[3606]: E0715 04:40:38.092275 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.093114 kubelet[3606]: E0715 04:40:38.093055 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.093114 kubelet[3606]: W0715 04:40:38.093083 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.093335 kubelet[3606]: E0715 04:40:38.093252 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.094074 kubelet[3606]: E0715 04:40:38.093975 3606 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:40:38.094074 kubelet[3606]: W0715 04:40:38.094005 3606 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:40:38.094609 kubelet[3606]: E0715 04:40:38.094424 3606 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:40:38.098707 systemd[1]: cri-containerd-5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e.scope: Deactivated successfully. 
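The flexvol-driver container started above runs only briefly: judging by the pod2daemon-flexvol image it was created from and by the FlexVolume probe errors stopping after this point, its job appears to be to drop the uds driver binary into the kubelet plugin directory and exit, which is why its scope is deactivated almost immediately. The exited_at field in the TaskExit entry just below is a Unix seconds/nanoseconds pair; a tiny Go check (illustrative only, not containerd code) converts it back to wall-clock time and lands on the same Jul 15 04:40:38.11x instant as the surrounding journal timestamps.

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit entry below: seconds:1752554438 nanos:112799651
	exitedAt := time.Unix(1752554438, 112799651).UTC()
	fmt.Println(exitedAt.Format("2006-01-02 15:04:05.000000000 MST"))
	// Prints 2025-07-15 04:40:38.112799651 UTC.
}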
Jul 15 04:40:38.114866 containerd[2009]: time="2025-07-15T04:40:38.114641846Z" level=info msg="received exit event container_id:\"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\" id:\"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\" pid:4297 exited_at:{seconds:1752554438 nanos:112799651}" Jul 15 04:40:38.114993 containerd[2009]: time="2025-07-15T04:40:38.114731993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\" id:\"5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e\" pid:4297 exited_at:{seconds:1752554438 nanos:112799651}" Jul 15 04:40:38.173696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e31a930563ab2f6d2476ce86922ad00189f14e72659cf3ef3911080555ec75e-rootfs.mount: Deactivated successfully. Jul 15 04:40:38.774392 kubelet[3606]: E0715 04:40:38.774320 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:39.021583 containerd[2009]: time="2025-07-15T04:40:39.021329826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 04:40:39.055938 kubelet[3606]: I0715 04:40:39.055446 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bc587588c-xt7ms" podStartSLOduration=4.233743108 podStartE2EDuration="7.05541916s" podCreationTimestamp="2025-07-15 04:40:32 +0000 UTC" firstStartedPulling="2025-07-15 04:40:33.714928798 +0000 UTC m=+26.187119661" lastFinishedPulling="2025-07-15 04:40:36.536604862 +0000 UTC m=+29.008795713" observedRunningTime="2025-07-15 04:40:37.076992075 +0000 UTC m=+29.549182974" watchObservedRunningTime="2025-07-15 04:40:39.05541916 +0000 UTC m=+31.527610035" Jul 15 04:40:40.774978 kubelet[3606]: E0715 04:40:40.774775 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:41.901963 containerd[2009]: time="2025-07-15T04:40:41.901909780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:41.903772 containerd[2009]: time="2025-07-15T04:40:41.903730877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 04:40:41.904136 containerd[2009]: time="2025-07-15T04:40:41.904100736Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:41.907943 containerd[2009]: time="2025-07-15T04:40:41.907875775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:41.909367 containerd[2009]: time="2025-07-15T04:40:41.909317958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.887898369s" Jul 15 04:40:41.909634 containerd[2009]: time="2025-07-15T04:40:41.909499547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 04:40:41.917352 containerd[2009]: time="2025-07-15T04:40:41.917191107Z" level=info msg="CreateContainer within sandbox \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 04:40:41.933612 containerd[2009]: time="2025-07-15T04:40:41.933539049Z" level=info msg="Container 7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:41.942069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676219902.mount: Deactivated successfully. Jul 15 04:40:41.951577 containerd[2009]: time="2025-07-15T04:40:41.951498757Z" level=info msg="CreateContainer within sandbox \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\"" Jul 15 04:40:41.953338 containerd[2009]: time="2025-07-15T04:40:41.952596304Z" level=info msg="StartContainer for \"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\"" Jul 15 04:40:41.957819 containerd[2009]: time="2025-07-15T04:40:41.957504213Z" level=info msg="connecting to shim 7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938" address="unix:///run/containerd/s/2c04a19ec7a19b49c99a48df2103f0f9d9d45674886428ce73220e365e42392a" protocol=ttrpc version=3 Jul 15 04:40:42.000588 systemd[1]: Started cri-containerd-7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938.scope - libcontainer container 7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938. Jul 15 04:40:42.097968 containerd[2009]: time="2025-07-15T04:40:42.097913521Z" level=info msg="StartContainer for \"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\" returns successfully" Jul 15 04:40:42.775441 kubelet[3606]: E0715 04:40:42.775378 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:43.133926 systemd[1]: cri-containerd-7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938.scope: Deactivated successfully. Jul 15 04:40:43.134472 systemd[1]: cri-containerd-7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938.scope: Consumed 933ms CPU time, 184.6M memory peak, 165.8M written to disk. 
Jul 15 04:40:43.139638 containerd[2009]: time="2025-07-15T04:40:43.139558098Z" level=info msg="received exit event container_id:\"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\" id:\"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\" pid:4384 exited_at:{seconds:1752554443 nanos:139174901}" Jul 15 04:40:43.140974 containerd[2009]: time="2025-07-15T04:40:43.140896988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\" id:\"7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938\" pid:4384 exited_at:{seconds:1752554443 nanos:139174901}" Jul 15 04:40:43.181208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7697f7b181d587b40f5e67608977d440613e6d7d5a90c5746d70952dc45b8938-rootfs.mount: Deactivated successfully. Jul 15 04:40:43.187574 kubelet[3606]: I0715 04:40:43.187532 3606 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 04:40:43.276407 systemd[1]: Created slice kubepods-burstable-pod32c70bee_38cd_40a3_a3d2_1c66fbc2b353.slice - libcontainer container kubepods-burstable-pod32c70bee_38cd_40a3_a3d2_1c66fbc2b353.slice. Jul 15 04:40:43.306196 systemd[1]: Created slice kubepods-burstable-podeb495a5a_d4bd_4828_9ddf_c81a6f55aa08.slice - libcontainer container kubepods-burstable-podeb495a5a_d4bd_4828_9ddf_c81a6f55aa08.slice. Jul 15 04:40:43.329975 kubelet[3606]: I0715 04:40:43.329842 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445js\" (UniqueName: \"kubernetes.io/projected/eb495a5a-d4bd-4828-9ddf-c81a6f55aa08-kube-api-access-445js\") pod \"coredns-7c65d6cfc9-6lqdq\" (UID: \"eb495a5a-d4bd-4828-9ddf-c81a6f55aa08\") " pod="kube-system/coredns-7c65d6cfc9-6lqdq" Jul 15 04:40:43.329975 kubelet[3606]: I0715 04:40:43.329909 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skg7m\" (UniqueName: \"kubernetes.io/projected/bb3d31a1-cb17-4c13-8521-539eb31aad4e-kube-api-access-skg7m\") pod \"calico-apiserver-6ddc7c5889-jr9jp\" (UID: \"bb3d31a1-cb17-4c13-8521-539eb31aad4e\") " pod="calico-apiserver/calico-apiserver-6ddc7c5889-jr9jp" Jul 15 04:40:43.329975 kubelet[3606]: I0715 04:40:43.329951 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b-calico-apiserver-certs\") pod \"calico-apiserver-6ddc7c5889-jb4bp\" (UID: \"8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b\") " pod="calico-apiserver/calico-apiserver-6ddc7c5889-jb4bp" Jul 15 04:40:43.330307 kubelet[3606]: I0715 04:40:43.329994 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5csmb\" (UniqueName: \"kubernetes.io/projected/8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b-kube-api-access-5csmb\") pod \"calico-apiserver-6ddc7c5889-jb4bp\" (UID: \"8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b\") " pod="calico-apiserver/calico-apiserver-6ddc7c5889-jb4bp" Jul 15 04:40:43.330307 kubelet[3606]: I0715 04:40:43.330034 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wv27\" (UniqueName: \"kubernetes.io/projected/c2fbd100-bc42-468d-a8e5-9c4e6ab30466-kube-api-access-2wv27\") pod \"calico-kube-controllers-5fc5d88d78-wgx85\" (UID: \"c2fbd100-bc42-468d-a8e5-9c4e6ab30466\") " 
pod="calico-system/calico-kube-controllers-5fc5d88d78-wgx85" Jul 15 04:40:43.330307 kubelet[3606]: I0715 04:40:43.330078 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb495a5a-d4bd-4828-9ddf-c81a6f55aa08-config-volume\") pod \"coredns-7c65d6cfc9-6lqdq\" (UID: \"eb495a5a-d4bd-4828-9ddf-c81a6f55aa08\") " pod="kube-system/coredns-7c65d6cfc9-6lqdq" Jul 15 04:40:43.330307 kubelet[3606]: I0715 04:40:43.330118 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32c70bee-38cd-40a3-a3d2-1c66fbc2b353-config-volume\") pod \"coredns-7c65d6cfc9-9nrk2\" (UID: \"32c70bee-38cd-40a3-a3d2-1c66fbc2b353\") " pod="kube-system/coredns-7c65d6cfc9-9nrk2" Jul 15 04:40:43.330307 kubelet[3606]: I0715 04:40:43.330160 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7htk\" (UniqueName: \"kubernetes.io/projected/32c70bee-38cd-40a3-a3d2-1c66fbc2b353-kube-api-access-m7htk\") pod \"coredns-7c65d6cfc9-9nrk2\" (UID: \"32c70bee-38cd-40a3-a3d2-1c66fbc2b353\") " pod="kube-system/coredns-7c65d6cfc9-9nrk2" Jul 15 04:40:43.330603 kubelet[3606]: I0715 04:40:43.330201 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2fbd100-bc42-468d-a8e5-9c4e6ab30466-tigera-ca-bundle\") pod \"calico-kube-controllers-5fc5d88d78-wgx85\" (UID: \"c2fbd100-bc42-468d-a8e5-9c4e6ab30466\") " pod="calico-system/calico-kube-controllers-5fc5d88d78-wgx85" Jul 15 04:40:43.330603 kubelet[3606]: I0715 04:40:43.330239 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bb3d31a1-cb17-4c13-8521-539eb31aad4e-calico-apiserver-certs\") pod \"calico-apiserver-6ddc7c5889-jr9jp\" (UID: \"bb3d31a1-cb17-4c13-8521-539eb31aad4e\") " pod="calico-apiserver/calico-apiserver-6ddc7c5889-jr9jp" Jul 15 04:40:43.332234 systemd[1]: Created slice kubepods-besteffort-pod8fb07b0b_58f7_4bdc_8a7f_62087e15fe4b.slice - libcontainer container kubepods-besteffort-pod8fb07b0b_58f7_4bdc_8a7f_62087e15fe4b.slice. Jul 15 04:40:43.352187 systemd[1]: Created slice kubepods-besteffort-podbb3d31a1_cb17_4c13_8521_539eb31aad4e.slice - libcontainer container kubepods-besteffort-podbb3d31a1_cb17_4c13_8521_539eb31aad4e.slice. Jul 15 04:40:43.372990 systemd[1]: Created slice kubepods-besteffort-podc2fbd100_bc42_468d_a8e5_9c4e6ab30466.slice - libcontainer container kubepods-besteffort-podc2fbd100_bc42_468d_a8e5_9c4e6ab30466.slice. Jul 15 04:40:43.392363 systemd[1]: Created slice kubepods-besteffort-pod599f0608_2596_480c_b12f_3aa0503a3b0c.slice - libcontainer container kubepods-besteffort-pod599f0608_2596_480c_b12f_3aa0503a3b0c.slice. Jul 15 04:40:43.409105 systemd[1]: Created slice kubepods-besteffort-podee258c44_6978_4e37_8cd2_4a604aaa3124.slice - libcontainer container kubepods-besteffort-podee258c44_6978_4e37_8cd2_4a604aaa3124.slice. 
Jul 15 04:40:43.431741 kubelet[3606]: I0715 04:40:43.430949 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd689\" (UniqueName: \"kubernetes.io/projected/599f0608-2596-480c-b12f-3aa0503a3b0c-kube-api-access-sd689\") pod \"goldmane-58fd7646b9-hx5c6\" (UID: \"599f0608-2596-480c-b12f-3aa0503a3b0c\") " pod="calico-system/goldmane-58fd7646b9-hx5c6" Jul 15 04:40:43.431741 kubelet[3606]: I0715 04:40:43.431035 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-ca-bundle\") pod \"whisker-fdd4b7bb6-ddxkm\" (UID: \"ee258c44-6978-4e37-8cd2-4a604aaa3124\") " pod="calico-system/whisker-fdd4b7bb6-ddxkm" Jul 15 04:40:43.431741 kubelet[3606]: I0715 04:40:43.431131 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599f0608-2596-480c-b12f-3aa0503a3b0c-config\") pod \"goldmane-58fd7646b9-hx5c6\" (UID: \"599f0608-2596-480c-b12f-3aa0503a3b0c\") " pod="calico-system/goldmane-58fd7646b9-hx5c6" Jul 15 04:40:43.431741 kubelet[3606]: I0715 04:40:43.431173 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/599f0608-2596-480c-b12f-3aa0503a3b0c-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-hx5c6\" (UID: \"599f0608-2596-480c-b12f-3aa0503a3b0c\") " pod="calico-system/goldmane-58fd7646b9-hx5c6" Jul 15 04:40:43.431741 kubelet[3606]: I0715 04:40:43.431257 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrz4\" (UniqueName: \"kubernetes.io/projected/ee258c44-6978-4e37-8cd2-4a604aaa3124-kube-api-access-lnrz4\") pod \"whisker-fdd4b7bb6-ddxkm\" (UID: \"ee258c44-6978-4e37-8cd2-4a604aaa3124\") " pod="calico-system/whisker-fdd4b7bb6-ddxkm" Jul 15 04:40:43.432150 kubelet[3606]: I0715 04:40:43.431344 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/599f0608-2596-480c-b12f-3aa0503a3b0c-goldmane-key-pair\") pod \"goldmane-58fd7646b9-hx5c6\" (UID: \"599f0608-2596-480c-b12f-3aa0503a3b0c\") " pod="calico-system/goldmane-58fd7646b9-hx5c6" Jul 15 04:40:43.432657 kubelet[3606]: I0715 04:40:43.432583 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-backend-key-pair\") pod \"whisker-fdd4b7bb6-ddxkm\" (UID: \"ee258c44-6978-4e37-8cd2-4a604aaa3124\") " pod="calico-system/whisker-fdd4b7bb6-ddxkm" Jul 15 04:40:43.594632 containerd[2009]: time="2025-07-15T04:40:43.594570975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9nrk2,Uid:32c70bee-38cd-40a3-a3d2-1c66fbc2b353,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:43.635161 containerd[2009]: time="2025-07-15T04:40:43.634957723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6lqdq,Uid:eb495a5a-d4bd-4828-9ddf-c81a6f55aa08,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:43.652150 containerd[2009]: time="2025-07-15T04:40:43.651859620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jb4bp,Uid:8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:43.672225 containerd[2009]: time="2025-07-15T04:40:43.671199082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jr9jp,Uid:bb3d31a1-cb17-4c13-8521-539eb31aad4e,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:43.682212 containerd[2009]: time="2025-07-15T04:40:43.682056854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc5d88d78-wgx85,Uid:c2fbd100-bc42-468d-a8e5-9c4e6ab30466,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:43.701327 containerd[2009]: time="2025-07-15T04:40:43.701190894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hx5c6,Uid:599f0608-2596-480c-b12f-3aa0503a3b0c,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:43.719847 containerd[2009]: time="2025-07-15T04:40:43.719707256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fdd4b7bb6-ddxkm,Uid:ee258c44-6978-4e37-8cd2-4a604aaa3124,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:44.026640 containerd[2009]: time="2025-07-15T04:40:44.025861591Z" level=error msg="Failed to destroy network for sandbox \"75ce2394d5814c01319607b2a23656ccd669e813e5aeaa7ac75e4ca77ee2589b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.031399 containerd[2009]: time="2025-07-15T04:40:44.030886081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fdd4b7bb6-ddxkm,Uid:ee258c44-6978-4e37-8cd2-4a604aaa3124,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ce2394d5814c01319607b2a23656ccd669e813e5aeaa7ac75e4ca77ee2589b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.032557 kubelet[3606]: E0715 04:40:44.032049 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ce2394d5814c01319607b2a23656ccd669e813e5aeaa7ac75e4ca77ee2589b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.032557 kubelet[3606]: E0715 04:40:44.032158 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ce2394d5814c01319607b2a23656ccd669e813e5aeaa7ac75e4ca77ee2589b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fdd4b7bb6-ddxkm" Jul 15 04:40:44.032557 kubelet[3606]: E0715 04:40:44.032190 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75ce2394d5814c01319607b2a23656ccd669e813e5aeaa7ac75e4ca77ee2589b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fdd4b7bb6-ddxkm" Jul 15 04:40:44.034169 
kubelet[3606]: E0715 04:40:44.032254 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fdd4b7bb6-ddxkm_calico-system(ee258c44-6978-4e37-8cd2-4a604aaa3124)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fdd4b7bb6-ddxkm_calico-system(ee258c44-6978-4e37-8cd2-4a604aaa3124)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75ce2394d5814c01319607b2a23656ccd669e813e5aeaa7ac75e4ca77ee2589b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fdd4b7bb6-ddxkm" podUID="ee258c44-6978-4e37-8cd2-4a604aaa3124" Jul 15 04:40:44.040010 containerd[2009]: time="2025-07-15T04:40:44.039941527Z" level=error msg="Failed to destroy network for sandbox \"5505344aba4a44a43ebbeefd2090005361a3d25278cb66ddf0418eee25de21bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.044332 containerd[2009]: time="2025-07-15T04:40:44.043790450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jb4bp,Uid:8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5505344aba4a44a43ebbeefd2090005361a3d25278cb66ddf0418eee25de21bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.046310 kubelet[3606]: E0715 04:40:44.045528 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5505344aba4a44a43ebbeefd2090005361a3d25278cb66ddf0418eee25de21bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.046310 kubelet[3606]: E0715 04:40:44.046030 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5505344aba4a44a43ebbeefd2090005361a3d25278cb66ddf0418eee25de21bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jb4bp" Jul 15 04:40:44.046310 kubelet[3606]: E0715 04:40:44.046103 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5505344aba4a44a43ebbeefd2090005361a3d25278cb66ddf0418eee25de21bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jb4bp" Jul 15 04:40:44.046588 kubelet[3606]: E0715 04:40:44.046205 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ddc7c5889-jb4bp_calico-apiserver(8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6ddc7c5889-jb4bp_calico-apiserver(8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5505344aba4a44a43ebbeefd2090005361a3d25278cb66ddf0418eee25de21bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jb4bp" podUID="8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b" Jul 15 04:40:44.075882 containerd[2009]: time="2025-07-15T04:40:44.075611839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 04:40:44.176122 containerd[2009]: time="2025-07-15T04:40:44.175563155Z" level=error msg="Failed to destroy network for sandbox \"bfe76a51d0f04cc0fafc67fb4d93170087f871d6b201d892b058cb757e201ce9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.183306 containerd[2009]: time="2025-07-15T04:40:44.183193666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc5d88d78-wgx85,Uid:c2fbd100-bc42-468d-a8e5-9c4e6ab30466,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe76a51d0f04cc0fafc67fb4d93170087f871d6b201d892b058cb757e201ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.186800 kubelet[3606]: E0715 04:40:44.184625 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe76a51d0f04cc0fafc67fb4d93170087f871d6b201d892b058cb757e201ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.186800 kubelet[3606]: E0715 04:40:44.184738 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe76a51d0f04cc0fafc67fb4d93170087f871d6b201d892b058cb757e201ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc5d88d78-wgx85" Jul 15 04:40:44.186800 kubelet[3606]: E0715 04:40:44.184774 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfe76a51d0f04cc0fafc67fb4d93170087f871d6b201d892b058cb757e201ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc5d88d78-wgx85" Jul 15 04:40:44.188925 kubelet[3606]: E0715 04:40:44.184866 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fc5d88d78-wgx85_calico-system(c2fbd100-bc42-468d-a8e5-9c4e6ab30466)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fc5d88d78-wgx85_calico-system(c2fbd100-bc42-468d-a8e5-9c4e6ab30466)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"bfe76a51d0f04cc0fafc67fb4d93170087f871d6b201d892b058cb757e201ce9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fc5d88d78-wgx85" podUID="c2fbd100-bc42-468d-a8e5-9c4e6ab30466" Jul 15 04:40:44.227344 systemd[1]: run-netns-cni\x2d8428343a\x2d9cb6\x2d8839\x2dc198\x2dbfc6b49a5570.mount: Deactivated successfully. Jul 15 04:40:44.239433 containerd[2009]: time="2025-07-15T04:40:44.239027763Z" level=error msg="Failed to destroy network for sandbox \"e7e41ea39e059a55f92b3f40c1f4e8acb5739fff6c5fd8f9758a4c31d08b5e94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.248030 systemd[1]: run-netns-cni\x2d78838b99\x2d28d6\x2da293\x2dc7f1\x2d0fbffad707e2.mount: Deactivated successfully. Jul 15 04:40:44.251258 containerd[2009]: time="2025-07-15T04:40:44.247900708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9nrk2,Uid:32c70bee-38cd-40a3-a3d2-1c66fbc2b353,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e41ea39e059a55f92b3f40c1f4e8acb5739fff6c5fd8f9758a4c31d08b5e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.251574 kubelet[3606]: E0715 04:40:44.251493 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e41ea39e059a55f92b3f40c1f4e8acb5739fff6c5fd8f9758a4c31d08b5e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.251687 kubelet[3606]: E0715 04:40:44.251587 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e41ea39e059a55f92b3f40c1f4e8acb5739fff6c5fd8f9758a4c31d08b5e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9nrk2" Jul 15 04:40:44.251687 kubelet[3606]: E0715 04:40:44.251620 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7e41ea39e059a55f92b3f40c1f4e8acb5739fff6c5fd8f9758a4c31d08b5e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9nrk2" Jul 15 04:40:44.251811 kubelet[3606]: E0715 04:40:44.251681 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9nrk2_kube-system(32c70bee-38cd-40a3-a3d2-1c66fbc2b353)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9nrk2_kube-system(32c70bee-38cd-40a3-a3d2-1c66fbc2b353)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e7e41ea39e059a55f92b3f40c1f4e8acb5739fff6c5fd8f9758a4c31d08b5e94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9nrk2" podUID="32c70bee-38cd-40a3-a3d2-1c66fbc2b353" Jul 15 04:40:44.262360 containerd[2009]: time="2025-07-15T04:40:44.261422672Z" level=error msg="Failed to destroy network for sandbox \"8b9b598ba62f58559afd337313ad187c50be4c5f5640be065efea4a2a4e23184\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.264324 containerd[2009]: time="2025-07-15T04:40:44.263930725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jr9jp,Uid:bb3d31a1-cb17-4c13-8521-539eb31aad4e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b598ba62f58559afd337313ad187c50be4c5f5640be065efea4a2a4e23184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.265656 kubelet[3606]: E0715 04:40:44.265561 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b598ba62f58559afd337313ad187c50be4c5f5640be065efea4a2a4e23184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.265656 kubelet[3606]: E0715 04:40:44.265647 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b598ba62f58559afd337313ad187c50be4c5f5640be065efea4a2a4e23184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jr9jp" Jul 15 04:40:44.265858 kubelet[3606]: E0715 04:40:44.265681 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b9b598ba62f58559afd337313ad187c50be4c5f5640be065efea4a2a4e23184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jr9jp" Jul 15 04:40:44.266399 kubelet[3606]: E0715 04:40:44.265784 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ddc7c5889-jr9jp_calico-apiserver(bb3d31a1-cb17-4c13-8521-539eb31aad4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ddc7c5889-jr9jp_calico-apiserver(bb3d31a1-cb17-4c13-8521-539eb31aad4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b9b598ba62f58559afd337313ad187c50be4c5f5640be065efea4a2a4e23184\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6ddc7c5889-jr9jp" podUID="bb3d31a1-cb17-4c13-8521-539eb31aad4e" Jul 15 04:40:44.267803 systemd[1]: run-netns-cni\x2df0a68382\x2d88f9\x2d2276\x2dacd3\x2d44389b120d2e.mount: Deactivated successfully. Jul 15 04:40:44.275509 containerd[2009]: time="2025-07-15T04:40:44.274651814Z" level=error msg="Failed to destroy network for sandbox \"be95cfe39212270bdd0e523047729b18042561570676b9bbb1eafc54074f8a73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.278655 containerd[2009]: time="2025-07-15T04:40:44.278514482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6lqdq,Uid:eb495a5a-d4bd-4828-9ddf-c81a6f55aa08,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be95cfe39212270bdd0e523047729b18042561570676b9bbb1eafc54074f8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.280663 kubelet[3606]: E0715 04:40:44.280587 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be95cfe39212270bdd0e523047729b18042561570676b9bbb1eafc54074f8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.280797 kubelet[3606]: E0715 04:40:44.280679 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be95cfe39212270bdd0e523047729b18042561570676b9bbb1eafc54074f8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6lqdq" Jul 15 04:40:44.280797 kubelet[3606]: E0715 04:40:44.280748 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be95cfe39212270bdd0e523047729b18042561570676b9bbb1eafc54074f8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6lqdq" Jul 15 04:40:44.281835 systemd[1]: run-netns-cni\x2d6241cd72\x2d9de1\x2dd9c4\x2d294d\x2d8b33d656cb19.mount: Deactivated successfully. 
Jul 15 04:40:44.285968 kubelet[3606]: E0715 04:40:44.284510 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6lqdq_kube-system(eb495a5a-d4bd-4828-9ddf-c81a6f55aa08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6lqdq_kube-system(eb495a5a-d4bd-4828-9ddf-c81a6f55aa08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be95cfe39212270bdd0e523047729b18042561570676b9bbb1eafc54074f8a73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6lqdq" podUID="eb495a5a-d4bd-4828-9ddf-c81a6f55aa08" Jul 15 04:40:44.287857 containerd[2009]: time="2025-07-15T04:40:44.287762300Z" level=error msg="Failed to destroy network for sandbox \"2d4d684ca3d32b192c57ffcc7017624f6ac453b36c427ddff3cc5fb83661f13e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.289453 containerd[2009]: time="2025-07-15T04:40:44.289361052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hx5c6,Uid:599f0608-2596-480c-b12f-3aa0503a3b0c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4d684ca3d32b192c57ffcc7017624f6ac453b36c427ddff3cc5fb83661f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.289787 kubelet[3606]: E0715 04:40:44.289674 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4d684ca3d32b192c57ffcc7017624f6ac453b36c427ddff3cc5fb83661f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.289787 kubelet[3606]: E0715 04:40:44.289749 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4d684ca3d32b192c57ffcc7017624f6ac453b36c427ddff3cc5fb83661f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-hx5c6" Jul 15 04:40:44.290037 kubelet[3606]: E0715 04:40:44.289781 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4d684ca3d32b192c57ffcc7017624f6ac453b36c427ddff3cc5fb83661f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-hx5c6" Jul 15 04:40:44.290037 kubelet[3606]: E0715 04:40:44.289856 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-hx5c6_calico-system(599f0608-2596-480c-b12f-3aa0503a3b0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-58fd7646b9-hx5c6_calico-system(599f0608-2596-480c-b12f-3aa0503a3b0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d4d684ca3d32b192c57ffcc7017624f6ac453b36c427ddff3cc5fb83661f13e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-hx5c6" podUID="599f0608-2596-480c-b12f-3aa0503a3b0c" Jul 15 04:40:44.785647 systemd[1]: Created slice kubepods-besteffort-pod2d26261d_e960_406a_9b63_17a87a2b10d4.slice - libcontainer container kubepods-besteffort-pod2d26261d_e960_406a_9b63_17a87a2b10d4.slice. Jul 15 04:40:44.791006 containerd[2009]: time="2025-07-15T04:40:44.790927789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc67v,Uid:2d26261d-e960-406a-9b63-17a87a2b10d4,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:44.881683 containerd[2009]: time="2025-07-15T04:40:44.881601829Z" level=error msg="Failed to destroy network for sandbox \"8c9d6c4d69e0a7b2ba266973973c412e5cc11bbaef8ab3b43eca00846020bc06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.883049 containerd[2009]: time="2025-07-15T04:40:44.882971520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc67v,Uid:2d26261d-e960-406a-9b63-17a87a2b10d4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9d6c4d69e0a7b2ba266973973c412e5cc11bbaef8ab3b43eca00846020bc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.883432 kubelet[3606]: E0715 04:40:44.883375 3606 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9d6c4d69e0a7b2ba266973973c412e5cc11bbaef8ab3b43eca00846020bc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:40:44.883512 kubelet[3606]: E0715 04:40:44.883476 3606 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9d6c4d69e0a7b2ba266973973c412e5cc11bbaef8ab3b43eca00846020bc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:44.883597 kubelet[3606]: E0715 04:40:44.883536 3606 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9d6c4d69e0a7b2ba266973973c412e5cc11bbaef8ab3b43eca00846020bc06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc67v" Jul 15 04:40:44.883806 kubelet[3606]: E0715 04:40:44.883733 3606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-gc67v_calico-system(2d26261d-e960-406a-9b63-17a87a2b10d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gc67v_calico-system(2d26261d-e960-406a-9b63-17a87a2b10d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c9d6c4d69e0a7b2ba266973973c412e5cc11bbaef8ab3b43eca00846020bc06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gc67v" podUID="2d26261d-e960-406a-9b63-17a87a2b10d4" Jul 15 04:40:45.179281 systemd[1]: run-netns-cni\x2d496d0954\x2d33be\x2d5442\x2dc9e3\x2d0f0c18408928.mount: Deactivated successfully. Jul 15 04:40:50.364718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780014064.mount: Deactivated successfully. Jul 15 04:40:50.429817 containerd[2009]: time="2025-07-15T04:40:50.429747841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:50.431310 containerd[2009]: time="2025-07-15T04:40:50.431228308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:40:50.432245 containerd[2009]: time="2025-07-15T04:40:50.432151031Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:50.435220 containerd[2009]: time="2025-07-15T04:40:50.435139312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:50.437670 containerd[2009]: time="2025-07-15T04:40:50.437585728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.361900965s" Jul 15 04:40:50.437670 containerd[2009]: time="2025-07-15T04:40:50.437653950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:40:50.467115 containerd[2009]: time="2025-07-15T04:40:50.467048209Z" level=info msg="CreateContainer within sandbox \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:40:50.495026 containerd[2009]: time="2025-07-15T04:40:50.494958667Z" level=info msg="Container 919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:50.503798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601533281.mount: Deactivated successfully. 
Jul 15 04:40:50.527074 containerd[2009]: time="2025-07-15T04:40:50.526988249Z" level=info msg="CreateContainer within sandbox \"1b088c04e71475a9faa7c20b51a99b7081c963a90c61943023ef1635660fe20f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\"" Jul 15 04:40:50.528815 containerd[2009]: time="2025-07-15T04:40:50.528707085Z" level=info msg="StartContainer for \"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\"" Jul 15 04:40:50.532585 containerd[2009]: time="2025-07-15T04:40:50.532520685Z" level=info msg="connecting to shim 919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef" address="unix:///run/containerd/s/2c04a19ec7a19b49c99a48df2103f0f9d9d45674886428ce73220e365e42392a" protocol=ttrpc version=3 Jul 15 04:40:50.567735 systemd[1]: Started cri-containerd-919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef.scope - libcontainer container 919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef. Jul 15 04:40:50.670973 containerd[2009]: time="2025-07-15T04:40:50.670606936Z" level=info msg="StartContainer for \"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" returns successfully" Jul 15 04:40:50.922180 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:40:50.922871 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 04:40:51.195750 kubelet[3606]: I0715 04:40:51.195585 3606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnrz4\" (UniqueName: \"kubernetes.io/projected/ee258c44-6978-4e37-8cd2-4a604aaa3124-kube-api-access-lnrz4\") pod \"ee258c44-6978-4e37-8cd2-4a604aaa3124\" (UID: \"ee258c44-6978-4e37-8cd2-4a604aaa3124\") " Jul 15 04:40:51.195750 kubelet[3606]: I0715 04:40:51.195667 3606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-backend-key-pair\") pod \"ee258c44-6978-4e37-8cd2-4a604aaa3124\" (UID: \"ee258c44-6978-4e37-8cd2-4a604aaa3124\") " Jul 15 04:40:51.195750 kubelet[3606]: I0715 04:40:51.195714 3606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-ca-bundle\") pod \"ee258c44-6978-4e37-8cd2-4a604aaa3124\" (UID: \"ee258c44-6978-4e37-8cd2-4a604aaa3124\") " Jul 15 04:40:51.205773 kubelet[3606]: I0715 04:40:51.205680 3606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ee258c44-6978-4e37-8cd2-4a604aaa3124" (UID: "ee258c44-6978-4e37-8cd2-4a604aaa3124"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 04:40:51.216830 kubelet[3606]: I0715 04:40:51.216751 3606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee258c44-6978-4e37-8cd2-4a604aaa3124-kube-api-access-lnrz4" (OuterVolumeSpecName: "kube-api-access-lnrz4") pod "ee258c44-6978-4e37-8cd2-4a604aaa3124" (UID: "ee258c44-6978-4e37-8cd2-4a604aaa3124"). InnerVolumeSpecName "kube-api-access-lnrz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:40:51.217144 kubelet[3606]: I0715 04:40:51.217112 3606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ee258c44-6978-4e37-8cd2-4a604aaa3124" (UID: "ee258c44-6978-4e37-8cd2-4a604aaa3124"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 04:40:51.296845 kubelet[3606]: I0715 04:40:51.296788 3606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnrz4\" (UniqueName: \"kubernetes.io/projected/ee258c44-6978-4e37-8cd2-4a604aaa3124-kube-api-access-lnrz4\") on node \"ip-172-31-22-130\" DevicePath \"\"" Jul 15 04:40:51.296845 kubelet[3606]: I0715 04:40:51.296841 3606 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-backend-key-pair\") on node \"ip-172-31-22-130\" DevicePath \"\"" Jul 15 04:40:51.297070 kubelet[3606]: I0715 04:40:51.296868 3606 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee258c44-6978-4e37-8cd2-4a604aaa3124-whisker-ca-bundle\") on node \"ip-172-31-22-130\" DevicePath \"\"" Jul 15 04:40:51.368592 systemd[1]: var-lib-kubelet-pods-ee258c44\x2d6978\x2d4e37\x2d8cd2\x2d4a604aaa3124-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlnrz4.mount: Deactivated successfully. Jul 15 04:40:51.368781 systemd[1]: var-lib-kubelet-pods-ee258c44\x2d6978\x2d4e37\x2d8cd2\x2d4a604aaa3124-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 04:40:51.449247 containerd[2009]: time="2025-07-15T04:40:51.449005568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"360781f2b1e00a60053273769a497e3f457f2969a411e1dd6f22ca4dbed96356\" pid:4694 exit_status:1 exited_at:{seconds:1752554451 nanos:448204957}" Jul 15 04:40:51.789915 systemd[1]: Removed slice kubepods-besteffort-podee258c44_6978_4e37_8cd2_4a604aaa3124.slice - libcontainer container kubepods-besteffort-podee258c44_6978_4e37_8cd2_4a604aaa3124.slice. Jul 15 04:40:52.154416 kubelet[3606]: I0715 04:40:52.152719 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nk99r" podStartSLOduration=2.6781903099999997 podStartE2EDuration="19.152688981s" podCreationTimestamp="2025-07-15 04:40:33 +0000 UTC" firstStartedPulling="2025-07-15 04:40:33.964846889 +0000 UTC m=+26.437037752" lastFinishedPulling="2025-07-15 04:40:50.439345488 +0000 UTC m=+42.911536423" observedRunningTime="2025-07-15 04:40:51.217770267 +0000 UTC m=+43.689961142" watchObservedRunningTime="2025-07-15 04:40:52.152688981 +0000 UTC m=+44.624879831" Jul 15 04:40:52.267353 systemd[1]: Created slice kubepods-besteffort-pod18f53521_0348_4c00_91e3_e1c5754cf5e9.slice - libcontainer container kubepods-besteffort-pod18f53521_0348_4c00_91e3_e1c5754cf5e9.slice. 
Jul 15 04:40:52.306688 kubelet[3606]: I0715 04:40:52.306612 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f53521-0348-4c00-91e3-e1c5754cf5e9-whisker-ca-bundle\") pod \"whisker-56678d84f4-nwpf4\" (UID: \"18f53521-0348-4c00-91e3-e1c5754cf5e9\") " pod="calico-system/whisker-56678d84f4-nwpf4" Jul 15 04:40:52.307930 kubelet[3606]: I0715 04:40:52.306712 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18f53521-0348-4c00-91e3-e1c5754cf5e9-whisker-backend-key-pair\") pod \"whisker-56678d84f4-nwpf4\" (UID: \"18f53521-0348-4c00-91e3-e1c5754cf5e9\") " pod="calico-system/whisker-56678d84f4-nwpf4" Jul 15 04:40:52.307930 kubelet[3606]: I0715 04:40:52.306757 3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9sw\" (UniqueName: \"kubernetes.io/projected/18f53521-0348-4c00-91e3-e1c5754cf5e9-kube-api-access-pm9sw\") pod \"whisker-56678d84f4-nwpf4\" (UID: \"18f53521-0348-4c00-91e3-e1c5754cf5e9\") " pod="calico-system/whisker-56678d84f4-nwpf4" Jul 15 04:40:52.529190 containerd[2009]: time="2025-07-15T04:40:52.529044666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"31d6a0a2e1241a4dd25d3e003741bbd2bb1e13e71b71f04165ff2a9c062adacf\" pid:4738 exit_status:1 exited_at:{seconds:1752554452 nanos:528543125}" Jul 15 04:40:52.577451 containerd[2009]: time="2025-07-15T04:40:52.577378327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56678d84f4-nwpf4,Uid:18f53521-0348-4c00-91e3-e1c5754cf5e9,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:53.008832 systemd-networkd[1869]: calid54dbee5a8f: Link UP Jul 15 04:40:53.009269 (udev-worker)[4676]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 04:40:53.010001 systemd-networkd[1869]: calid54dbee5a8f: Gained carrier Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.627 [INFO][4752] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.727 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0 whisker-56678d84f4- calico-system 18f53521-0348-4c00-91e3-e1c5754cf5e9 918 0 2025-07-15 04:40:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56678d84f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-130 whisker-56678d84f4-nwpf4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid54dbee5a8f [] [] }} ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.727 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.898 [INFO][4799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" HandleID="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Workload="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.899 [INFO][4799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" HandleID="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Workload="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002afdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-130", "pod":"whisker-56678d84f4-nwpf4", "timestamp":"2025-07-15 04:40:52.898839069 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.899 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.901 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.901 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.919 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.932 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.943 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.946 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.954 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.954 [INFO][4799] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.958 [INFO][4799] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680 Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.965 [INFO][4799] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.978 [INFO][4799] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.193/26] block=192.168.45.192/26 handle="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.978 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.193/26] handle="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" host="ip-172-31-22-130" Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.978 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:53.065563 containerd[2009]: 2025-07-15 04:40:52.978 [INFO][4799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.193/26] IPv6=[] ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" HandleID="k8s-pod-network.719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Workload="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.067265 containerd[2009]: 2025-07-15 04:40:52.987 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0", GenerateName:"whisker-56678d84f4-", Namespace:"calico-system", SelfLink:"", UID:"18f53521-0348-4c00-91e3-e1c5754cf5e9", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56678d84f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"whisker-56678d84f4-nwpf4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid54dbee5a8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:53.067265 containerd[2009]: 2025-07-15 04:40:52.987 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.193/32] ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.067265 containerd[2009]: 2025-07-15 04:40:52.987 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid54dbee5a8f ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.067265 containerd[2009]: 2025-07-15 04:40:53.016 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.067265 containerd[2009]: 2025-07-15 04:40:53.017 [INFO][4752] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" 
WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0", GenerateName:"whisker-56678d84f4-", Namespace:"calico-system", SelfLink:"", UID:"18f53521-0348-4c00-91e3-e1c5754cf5e9", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56678d84f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680", Pod:"whisker-56678d84f4-nwpf4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid54dbee5a8f", MAC:"e2:97:2a:b2:70:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:53.067265 containerd[2009]: 2025-07-15 04:40:53.046 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" Namespace="calico-system" Pod="whisker-56678d84f4-nwpf4" WorkloadEndpoint="ip--172--31--22--130-k8s-whisker--56678d84f4--nwpf4-eth0" Jul 15 04:40:53.155635 containerd[2009]: time="2025-07-15T04:40:53.155474856Z" level=info msg="connecting to shim 719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680" address="unix:///run/containerd/s/cdf454d6d7c16a39b2f1d34baf4a89f8d3a0240a27d06c1ab538d5cbc78fab86" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:53.250699 systemd[1]: Started cri-containerd-719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680.scope - libcontainer container 719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680. 
Jul 15 04:40:53.407996 containerd[2009]: time="2025-07-15T04:40:53.407851201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56678d84f4-nwpf4,Uid:18f53521-0348-4c00-91e3-e1c5754cf5e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680\"" Jul 15 04:40:53.414092 containerd[2009]: time="2025-07-15T04:40:53.414028867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:40:53.785096 kubelet[3606]: I0715 04:40:53.784953 3606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee258c44-6978-4e37-8cd2-4a604aaa3124" path="/var/lib/kubelet/pods/ee258c44-6978-4e37-8cd2-4a604aaa3124/volumes" Jul 15 04:40:54.586856 systemd-networkd[1869]: calid54dbee5a8f: Gained IPv6LL Jul 15 04:40:54.690121 containerd[2009]: time="2025-07-15T04:40:54.690053283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:54.694754 containerd[2009]: time="2025-07-15T04:40:54.694662625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:40:54.696040 containerd[2009]: time="2025-07-15T04:40:54.695960940Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:54.703516 containerd[2009]: time="2025-07-15T04:40:54.703430491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:54.706271 containerd[2009]: time="2025-07-15T04:40:54.706191426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.292097887s" Jul 15 04:40:54.706618 containerd[2009]: time="2025-07-15T04:40:54.706325219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:40:54.714532 containerd[2009]: time="2025-07-15T04:40:54.714462561Z" level=info msg="CreateContainer within sandbox \"719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:40:54.730140 containerd[2009]: time="2025-07-15T04:40:54.729776848Z" level=info msg="Container db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:54.758020 containerd[2009]: time="2025-07-15T04:40:54.757850580Z" level=info msg="CreateContainer within sandbox \"719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25\"" Jul 15 04:40:54.763731 containerd[2009]: time="2025-07-15T04:40:54.760223317Z" level=info msg="StartContainer for \"db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25\"" Jul 15 04:40:54.766918 containerd[2009]: time="2025-07-15T04:40:54.766842050Z" level=info msg="connecting to shim 
db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25" address="unix:///run/containerd/s/cdf454d6d7c16a39b2f1d34baf4a89f8d3a0240a27d06c1ab538d5cbc78fab86" protocol=ttrpc version=3 Jul 15 04:40:54.786325 containerd[2009]: time="2025-07-15T04:40:54.785762897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jb4bp,Uid:8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:54.844671 systemd[1]: Started cri-containerd-db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25.scope - libcontainer container db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25. Jul 15 04:40:54.957889 containerd[2009]: time="2025-07-15T04:40:54.957831148Z" level=info msg="StartContainer for \"db946e5aa904f4d1db4bb73bc0b9fe2150e103b343ff4909e7b1790baa6c7c25\" returns successfully" Jul 15 04:40:54.967266 containerd[2009]: time="2025-07-15T04:40:54.966593461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:40:55.084476 systemd-networkd[1869]: cali412b7d016cf: Link UP Jul 15 04:40:55.086338 systemd-networkd[1869]: cali412b7d016cf: Gained carrier Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.875 [INFO][4952] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.899 [INFO][4952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0 calico-apiserver-6ddc7c5889- calico-apiserver 8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b 840 0 2025-07-15 04:40:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ddc7c5889 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-130 calico-apiserver-6ddc7c5889-jb4bp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali412b7d016cf [] [] }} ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.899 [INFO][4952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.995 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" HandleID="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Workload="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.995 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" HandleID="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Workload="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002cb710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-130", "pod":"calico-apiserver-6ddc7c5889-jb4bp", "timestamp":"2025-07-15 04:40:54.99540504 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.996 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.996 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:54.996 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.021 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.029 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.044 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.050 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.054 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.054 [INFO][4977] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.056 [INFO][4977] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.063 [INFO][4977] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.075 [INFO][4977] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.194/26] block=192.168.45.192/26 handle="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.075 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.194/26] handle="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" host="ip-172-31-22-130" Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.075 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:55.114486 containerd[2009]: 2025-07-15 04:40:55.076 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.194/26] IPv6=[] ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" HandleID="k8s-pod-network.89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Workload="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.115600 containerd[2009]: 2025-07-15 04:40:55.079 [INFO][4952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0", GenerateName:"calico-apiserver-6ddc7c5889-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ddc7c5889", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"calico-apiserver-6ddc7c5889-jb4bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali412b7d016cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:55.115600 containerd[2009]: 2025-07-15 04:40:55.079 [INFO][4952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.194/32] ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.115600 containerd[2009]: 2025-07-15 04:40:55.079 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali412b7d016cf ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.115600 containerd[2009]: 2025-07-15 04:40:55.087 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.115600 containerd[2009]: 2025-07-15 04:40:55.088 [INFO][4952] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0", GenerateName:"calico-apiserver-6ddc7c5889-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ddc7c5889", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa", Pod:"calico-apiserver-6ddc7c5889-jb4bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali412b7d016cf", MAC:"b6:9d:b2:3f:db:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:55.115600 containerd[2009]: 2025-07-15 04:40:55.108 [INFO][4952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jb4bp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jb4bp-eth0" Jul 15 04:40:55.153047 containerd[2009]: time="2025-07-15T04:40:55.152897835Z" level=info msg="connecting to shim 89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa" address="unix:///run/containerd/s/cb08d1e2c71643a00fe8fb8367469bf182b1c6f6102c5e1f4901a4288b262707" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:55.198581 systemd[1]: Started cri-containerd-89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa.scope - libcontainer container 89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa. 
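Each "connecting to shim" entry above carries an address of the form unix:///run/containerd/s/<id>. The sketch below is a rough illustration rather than a real shim client (containerd speaks ttrpc over this socket, as the protocol=ttrpc field notes); it only shows how such an address splits into a scheme and a filesystem socket path with the Go standard library. The socket path is taken from the log entry above, and the 2-second dial timeout is an arbitrary choice:

    // shimaddr.go: decompose and probe a containerd shim address as logged above.
    // Dialing the socket raw only checks that the shim's listener exists; it is
    // not a shim client and would only succeed on the node itself.
    package main

    import (
        "fmt"
        "net"
        "net/url"
        "time"
    )

    func main() {
        addr := "unix:///run/containerd/s/cb08d1e2c71643a00fe8fb8367469bf182b1c6f6102c5e1f4901a4288b262707"

        u, err := url.Parse(addr)
        if err != nil {
            panic(err)
        }
        fmt.Println("scheme:", u.Scheme) // unix
        fmt.Println("path:  ", u.Path)   // /run/containerd/s/cb08d1e2...

        conn, err := net.DialTimeout(u.Scheme, u.Path, 2*time.Second)
        if err != nil {
            fmt.Println("dial failed (expected off-node):", err)
            return
        }
        conn.Close()
        fmt.Println("shim socket reachable")
    }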
Jul 15 04:40:55.270040 containerd[2009]: time="2025-07-15T04:40:55.269892324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jb4bp,Uid:8fb07b0b-58f7-4bdc-8a7f-62087e15fe4b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa\"" Jul 15 04:40:55.777344 containerd[2009]: time="2025-07-15T04:40:55.776903869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9nrk2,Uid:32c70bee-38cd-40a3-a3d2-1c66fbc2b353,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:55.779798 containerd[2009]: time="2025-07-15T04:40:55.777211120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6lqdq,Uid:eb495a5a-d4bd-4828-9ddf-c81a6f55aa08,Namespace:kube-system,Attempt:0,}" Jul 15 04:40:55.779798 containerd[2009]: time="2025-07-15T04:40:55.777278514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hx5c6,Uid:599f0608-2596-480c-b12f-3aa0503a3b0c,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:55.934369 kubelet[3606]: I0715 04:40:55.934107 3606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:40:56.502083 systemd-networkd[1869]: cali17e15bcc115: Link UP Jul 15 04:40:56.507141 systemd-networkd[1869]: cali17e15bcc115: Gained carrier Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:55.968 [INFO][5060] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.123 [INFO][5060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0 coredns-7c65d6cfc9- kube-system 32c70bee-38cd-40a3-a3d2-1c66fbc2b353 839 0 2025-07-15 04:40:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-130 coredns-7c65d6cfc9-9nrk2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17e15bcc115 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.124 [INFO][5060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.325 [INFO][5105] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" HandleID="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Workload="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.325 [INFO][5105] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" HandleID="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Workload="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x400034d060), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-130", "pod":"coredns-7c65d6cfc9-9nrk2", "timestamp":"2025-07-15 04:40:56.325388423 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.326 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.326 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.326 [INFO][5105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.365 [INFO][5105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.380 [INFO][5105] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.401 [INFO][5105] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.409 [INFO][5105] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.421 [INFO][5105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.422 [INFO][5105] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.428 [INFO][5105] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.444 [INFO][5105] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.468 [INFO][5105] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.195/26] block=192.168.45.192/26 handle="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.468 [INFO][5105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.195/26] handle="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" host="ip-172-31-22-130" Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.470 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:56.564323 containerd[2009]: 2025-07-15 04:40:56.470 [INFO][5105] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.195/26] IPv6=[] ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" HandleID="k8s-pod-network.1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Workload="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.566698 containerd[2009]: 2025-07-15 04:40:56.489 [INFO][5060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"32c70bee-38cd-40a3-a3d2-1c66fbc2b353", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"coredns-7c65d6cfc9-9nrk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e15bcc115", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:56.566698 containerd[2009]: 2025-07-15 04:40:56.489 [INFO][5060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.195/32] ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.566698 containerd[2009]: 2025-07-15 04:40:56.489 [INFO][5060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17e15bcc115 ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.566698 containerd[2009]: 2025-07-15 04:40:56.503 [INFO][5060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" 
WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.566698 containerd[2009]: 2025-07-15 04:40:56.504 [INFO][5060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"32c70bee-38cd-40a3-a3d2-1c66fbc2b353", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a", Pod:"coredns-7c65d6cfc9-9nrk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e15bcc115", MAC:"96:13:be:9d:ee:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:56.567769 containerd[2009]: 2025-07-15 04:40:56.547 [INFO][5060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9nrk2" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--9nrk2-eth0" Jul 15 04:40:56.635255 containerd[2009]: time="2025-07-15T04:40:56.635063993Z" level=info msg="connecting to shim 1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a" address="unix:///run/containerd/s/5bdf19c7abb4f9a8fcdbfad69c1dcfb1450c60f7bf578a2e754cb98520de2cf3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:56.770052 systemd-networkd[1869]: calid0bc130e7c1: Link UP Jul 15 04:40:56.774602 systemd-networkd[1869]: calid0bc130e7c1: Gained carrier Jul 15 04:40:56.777736 systemd[1]: Started cri-containerd-1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a.scope - libcontainer container 1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a. 
Jul 15 04:40:56.783851 containerd[2009]: time="2025-07-15T04:40:56.782951225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc5d88d78-wgx85,Uid:c2fbd100-bc42-468d-a8e5-9c4e6ab30466,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:56.788802 containerd[2009]: time="2025-07-15T04:40:56.788453592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jr9jp,Uid:bb3d31a1-cb17-4c13-8521-539eb31aad4e,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.010 [INFO][5065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.115 [INFO][5065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0 coredns-7c65d6cfc9- kube-system eb495a5a-d4bd-4828-9ddf-c81a6f55aa08 844 0 2025-07-15 04:40:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-130 coredns-7c65d6cfc9-6lqdq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid0bc130e7c1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.115 [INFO][5065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.330 [INFO][5106] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" HandleID="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Workload="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.331 [INFO][5106] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" HandleID="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Workload="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103830), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-130", "pod":"coredns-7c65d6cfc9-6lqdq", "timestamp":"2025-07-15 04:40:56.330063697 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.331 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.470 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.470 [INFO][5106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.527 [INFO][5106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.584 [INFO][5106] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.605 [INFO][5106] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.611 [INFO][5106] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.619 [INFO][5106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.619 [INFO][5106] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.624 [INFO][5106] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.650 [INFO][5106] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.674 [INFO][5106] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.196/26] block=192.168.45.192/26 handle="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.675 [INFO][5106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.196/26] handle="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" host="ip-172-31-22-130" Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.675 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:56.877305 containerd[2009]: 2025-07-15 04:40:56.677 [INFO][5106] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.196/26] IPv6=[] ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" HandleID="k8s-pod-network.9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Workload="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:56.879465 containerd[2009]: 2025-07-15 04:40:56.687 [INFO][5065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb495a5a-d4bd-4828-9ddf-c81a6f55aa08", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"coredns-7c65d6cfc9-6lqdq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0bc130e7c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:56.879465 containerd[2009]: 2025-07-15 04:40:56.688 [INFO][5065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.196/32] ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:56.879465 containerd[2009]: 2025-07-15 04:40:56.688 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0bc130e7c1 ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:56.879465 containerd[2009]: 2025-07-15 04:40:56.782 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" 
WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:56.879465 containerd[2009]: 2025-07-15 04:40:56.786 [INFO][5065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb495a5a-d4bd-4828-9ddf-c81a6f55aa08", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a", Pod:"coredns-7c65d6cfc9-6lqdq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0bc130e7c1", MAC:"ae:cb:1a:61:c8:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:56.879867 containerd[2009]: 2025-07-15 04:40:56.854 [INFO][5065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6lqdq" WorkloadEndpoint="ip--172--31--22--130-k8s-coredns--7c65d6cfc9--6lqdq-eth0" Jul 15 04:40:57.053505 containerd[2009]: time="2025-07-15T04:40:57.051502763Z" level=info msg="connecting to shim 9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a" address="unix:///run/containerd/s/265a76a0a8d36714f795fc16310a95fed7bbd24479da89387eec718fb3e44396" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:57.080871 systemd-networkd[1869]: cali2c95a7148e3: Link UP Jul 15 04:40:57.081268 systemd-networkd[1869]: cali2c95a7148e3: Gained carrier Jul 15 04:40:57.088651 systemd-networkd[1869]: cali412b7d016cf: Gained IPv6LL Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.058 [INFO][5073] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.146 [INFO][5073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0 goldmane-58fd7646b9- calico-system 599f0608-2596-480c-b12f-3aa0503a3b0c 841 0 2025-07-15 04:40:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-130 goldmane-58fd7646b9-hx5c6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2c95a7148e3 [] [] }} ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.146 [INFO][5073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.363 [INFO][5114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" HandleID="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Workload="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.363 [INFO][5114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" HandleID="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Workload="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400061f2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-130", "pod":"goldmane-58fd7646b9-hx5c6", "timestamp":"2025-07-15 04:40:56.363742681 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.364 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.676 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.677 [INFO][5114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.718 [INFO][5114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.747 [INFO][5114] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.827 [INFO][5114] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.846 [INFO][5114] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.885 [INFO][5114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.885 [INFO][5114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.898 [INFO][5114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480 Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.920 [INFO][5114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.983 [INFO][5114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.197/26] block=192.168.45.192/26 handle="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.986 [INFO][5114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.197/26] handle="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" host="ip-172-31-22-130" Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.987 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:57.182564 containerd[2009]: 2025-07-15 04:40:56.988 [INFO][5114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.197/26] IPv6=[] ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" HandleID="k8s-pod-network.2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Workload="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.184718 containerd[2009]: 2025-07-15 04:40:57.034 [INFO][5073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"599f0608-2596-480c-b12f-3aa0503a3b0c", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"goldmane-58fd7646b9-hx5c6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2c95a7148e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:57.184718 containerd[2009]: 2025-07-15 04:40:57.038 [INFO][5073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.197/32] ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.184718 containerd[2009]: 2025-07-15 04:40:57.038 [INFO][5073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c95a7148e3 ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.184718 containerd[2009]: 2025-07-15 04:40:57.080 [INFO][5073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.184718 containerd[2009]: 2025-07-15 04:40:57.091 [INFO][5073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" 
WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"599f0608-2596-480c-b12f-3aa0503a3b0c", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480", Pod:"goldmane-58fd7646b9-hx5c6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2c95a7148e3", MAC:"0e:c8:d1:69:2d:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:57.184718 containerd[2009]: 2025-07-15 04:40:57.152 [INFO][5073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" Namespace="calico-system" Pod="goldmane-58fd7646b9-hx5c6" WorkloadEndpoint="ip--172--31--22--130-k8s-goldmane--58fd7646b9--hx5c6-eth0" Jul 15 04:40:57.209727 systemd[1]: Started cri-containerd-9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a.scope - libcontainer container 9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a. 
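The two endpoint dumps above are the same projectcalico.org/v3 WorkloadEndpoint, first without and then with the ContainerID and MAC filled in. A trimmed-down, hypothetical Go struct covering only the fields visible in these lines (the real type in Calico's libcalico-go API carries many more):

```go
// Hypothetical, trimmed-down mirror of the WorkloadEndpoint fields seen in the
// log dumps above; not the real projectcalico.org/v3 type.
package main

import "fmt"

type WorkloadEndpointSpec struct {
	Orchestrator       string   // "k8s"
	Node               string   // "ip-172-31-22-130"
	ContainerID        string   // empty until the sandbox exists
	Pod                string   // "goldmane-58fd7646b9-hx5c6"
	Endpoint           string   // "eth0"
	ServiceAccountName string   // "goldmane"
	IPNetworks         []string // ["192.168.45.197/32"]
	Profiles           []string // ["kns.calico-system", "ksa.calico-system.goldmane"]
	InterfaceName      string   // host-side veth, e.g. "cali2c95a7148e3"
	MAC                string   // set once the veth is wired up
}

func main() {
	ep := WorkloadEndpointSpec{
		Orchestrator:       "k8s",
		Node:               "ip-172-31-22-130",
		ContainerID:        "2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480",
		Pod:                "goldmane-58fd7646b9-hx5c6",
		Endpoint:           "eth0",
		ServiceAccountName: "goldmane",
		IPNetworks:         []string{"192.168.45.197/32"},
		Profiles:           []string{"kns.calico-system", "ksa.calico-system.goldmane"},
		InterfaceName:      "cali2c95a7148e3",
		MAC:                "0e:c8:d1:69:2d:44",
	}
	fmt.Printf("%+v\n", ep)
}
```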
Jul 15 04:40:57.272324 containerd[2009]: time="2025-07-15T04:40:57.272239980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9nrk2,Uid:32c70bee-38cd-40a3-a3d2-1c66fbc2b353,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a\"" Jul 15 04:40:57.290095 containerd[2009]: time="2025-07-15T04:40:57.288103400Z" level=info msg="CreateContainer within sandbox \"1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:40:57.390394 containerd[2009]: time="2025-07-15T04:40:57.388585930Z" level=info msg="connecting to shim 2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480" address="unix:///run/containerd/s/745319dc3572e0054af176f2aeeef6631aea4a8c5d4508d0072a5114fd963d96" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:57.393644 containerd[2009]: time="2025-07-15T04:40:57.393577245Z" level=info msg="Container 344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:57.465760 containerd[2009]: time="2025-07-15T04:40:57.465672016Z" level=info msg="CreateContainer within sandbox \"1d6fb1002936169c95cd2831e467c118922bb3561b61423bcf69f90114d5ea5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367\"" Jul 15 04:40:57.475724 containerd[2009]: time="2025-07-15T04:40:57.475022142Z" level=info msg="StartContainer for \"344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367\"" Jul 15 04:40:57.496922 containerd[2009]: time="2025-07-15T04:40:57.496841411Z" level=info msg="connecting to shim 344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367" address="unix:///run/containerd/s/5bdf19c7abb4f9a8fcdbfad69c1dcfb1450c60f7bf578a2e754cb98520de2cf3" protocol=ttrpc version=3 Jul 15 04:40:57.599976 containerd[2009]: time="2025-07-15T04:40:57.599916253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6lqdq,Uid:eb495a5a-d4bd-4828-9ddf-c81a6f55aa08,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a\"" Jul 15 04:40:57.632779 containerd[2009]: time="2025-07-15T04:40:57.632707777Z" level=info msg="CreateContainer within sandbox \"9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:40:57.636716 systemd[1]: Started cri-containerd-2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480.scope - libcontainer container 2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480. Jul 15 04:40:57.695025 systemd[1]: Started cri-containerd-344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367.scope - libcontainer container 344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367. 
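Each "connecting to shim ... address=unix:///run/containerd/s/... protocol=ttrpc version=3" line is containerd dialing a per-shim Unix socket and speaking ttrpc over it. A minimal sketch of the transport step only, assuming the socket path copied from the log (which exists only on that host); the real client layers a ttrpc client on top of the connection rather than using it raw:

```go
// Sketch of the transport behind "connecting to shim ... protocol=ttrpc":
// dial the shim's per-task Unix socket. containerd wraps this connection in a
// ttrpc client; only the dial is shown here. The path is copied from the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const shimSocket = "/run/containerd/s/745319dc3572e0054af176f2aeeef6631aea4a8c5d4508d0072a5114fd963d96"

	conn, err := net.DialTimeout("unix", shimSocket, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected anywhere but that node):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
}
```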
Jul 15 04:40:57.709441 containerd[2009]: time="2025-07-15T04:40:57.708076490Z" level=info msg="Container 9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:57.786729 systemd-networkd[1869]: cali17e15bcc115: Gained IPv6LL Jul 15 04:40:57.839117 containerd[2009]: time="2025-07-15T04:40:57.838644409Z" level=info msg="CreateContainer within sandbox \"9d7ace3cc3436e4c06f7fa269240e4c0d67c67387f22fda075a34d835171722a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977\"" Jul 15 04:40:57.842074 containerd[2009]: time="2025-07-15T04:40:57.841924265Z" level=info msg="StartContainer for \"9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977\"" Jul 15 04:40:57.851040 containerd[2009]: time="2025-07-15T04:40:57.850940610Z" level=info msg="connecting to shim 9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977" address="unix:///run/containerd/s/265a76a0a8d36714f795fc16310a95fed7bbd24479da89387eec718fb3e44396" protocol=ttrpc version=3 Jul 15 04:40:57.913733 systemd-networkd[1869]: calid0bc130e7c1: Gained IPv6LL Jul 15 04:40:57.984919 systemd[1]: Started cri-containerd-9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977.scope - libcontainer container 9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977. Jul 15 04:40:58.030318 containerd[2009]: time="2025-07-15T04:40:58.030208908Z" level=info msg="StartContainer for \"344adf74927bfc9269fbaad440f5761ddcfdd8ebba4dedbcefe5986346186367\" returns successfully" Jul 15 04:40:58.080881 systemd-networkd[1869]: cali4b0db169ecf: Link UP Jul 15 04:40:58.121133 systemd-networkd[1869]: cali4b0db169ecf: Gained carrier Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.094 [INFO][5176] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.258 [INFO][5176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0 calico-apiserver-6ddc7c5889- calico-apiserver bb3d31a1-cb17-4c13-8521-539eb31aad4e 842 0 2025-07-15 04:40:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ddc7c5889 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-130 calico-apiserver-6ddc7c5889-jr9jp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b0db169ecf [] [] }} ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.262 [INFO][5176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.692 [INFO][5273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" 
HandleID="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Workload="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.692 [INFO][5273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" HandleID="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Workload="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000183180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-130", "pod":"calico-apiserver-6ddc7c5889-jr9jp", "timestamp":"2025-07-15 04:40:57.692457267 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.692 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.692 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.692 [INFO][5273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.797 [INFO][5273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.834 [INFO][5273] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.892 [INFO][5273] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.906 [INFO][5273] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.924 [INFO][5273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.924 [INFO][5273] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:57.934 [INFO][5273] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6 Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:58.001 [INFO][5273] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:58.037 [INFO][5273] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.198/26] block=192.168.45.192/26 handle="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:58.038 [INFO][5273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.45.198/26] handle="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" host="ip-172-31-22-130" Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:58.038 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:40:58.214678 containerd[2009]: 2025-07-15 04:40:58.039 [INFO][5273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.198/26] IPv6=[] ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" HandleID="k8s-pod-network.b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Workload="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.216563 containerd[2009]: 2025-07-15 04:40:58.055 [INFO][5176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0", GenerateName:"calico-apiserver-6ddc7c5889-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb3d31a1-cb17-4c13-8521-539eb31aad4e", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ddc7c5889", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"calico-apiserver-6ddc7c5889-jr9jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b0db169ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:58.216563 containerd[2009]: 2025-07-15 04:40:58.055 [INFO][5176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.198/32] ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.216563 containerd[2009]: 2025-07-15 04:40:58.055 [INFO][5176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b0db169ecf ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.216563 containerd[2009]: 2025-07-15 04:40:58.131 [INFO][5176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" 
Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.216563 containerd[2009]: 2025-07-15 04:40:58.132 [INFO][5176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0", GenerateName:"calico-apiserver-6ddc7c5889-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb3d31a1-cb17-4c13-8521-539eb31aad4e", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ddc7c5889", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6", Pod:"calico-apiserver-6ddc7c5889-jr9jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b0db169ecf", MAC:"ba:bb:3f:bd:de:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:58.216563 containerd[2009]: 2025-07-15 04:40:58.194 [INFO][5176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" Namespace="calico-apiserver" Pod="calico-apiserver-6ddc7c5889-jr9jp" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--apiserver--6ddc7c5889--jr9jp-eth0" Jul 15 04:40:58.233622 systemd-networkd[1869]: cali2c95a7148e3: Gained IPv6LL Jul 15 04:40:58.300238 kubelet[3606]: I0715 04:40:58.300060 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9nrk2" podStartSLOduration=45.300032445 podStartE2EDuration="45.300032445s" podCreationTimestamp="2025-07-15 04:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:58.292756753 +0000 UTC m=+50.764947652" watchObservedRunningTime="2025-07-15 04:40:58.300032445 +0000 UTC m=+50.772223884" Jul 15 04:40:58.363848 containerd[2009]: time="2025-07-15T04:40:58.363610109Z" level=info msg="StartContainer for \"9c5aee65d5c4472f728a399b935e91e8c148b156c003526a3e8303da69c56977\" returns successfully" Jul 15 04:40:58.407032 systemd-networkd[1869]: cali605657c644c: Link UP Jul 15 04:40:58.411825 systemd-networkd[1869]: cali605657c644c: Gained carrier Jul 15 04:40:58.455655 containerd[2009]: 
time="2025-07-15T04:40:58.455181397Z" level=info msg="connecting to shim b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6" address="unix:///run/containerd/s/2e3291955fd48c82593e5dbaa4fef7ca1df00efec16fb9a95a0675402cfbc806" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:58.546561 containerd[2009]: time="2025-07-15T04:40:58.546312325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hx5c6,Uid:599f0608-2596-480c-b12f-3aa0503a3b0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480\"" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:57.271 [INFO][5179] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:57.380 [INFO][5179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0 calico-kube-controllers-5fc5d88d78- calico-system c2fbd100-bc42-468d-a8e5-9c4e6ab30466 845 0 2025-07-15 04:40:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fc5d88d78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-130 calico-kube-controllers-5fc5d88d78-wgx85 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali605657c644c [] [] }} ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:57.394 [INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:57.847 [INFO][5298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" HandleID="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Workload="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:57.848 [INFO][5298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" HandleID="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Workload="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003186c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-130", "pod":"calico-kube-controllers-5fc5d88d78-wgx85", "timestamp":"2025-07-15 04:40:57.847277475 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:57.849 
[INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.038 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.039 [INFO][5298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.141 [INFO][5298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.164 [INFO][5298] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.190 [INFO][5298] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.207 [INFO][5298] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.224 [INFO][5298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.227 [INFO][5298] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.244 [INFO][5298] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694 Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.282 [INFO][5298] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.327 [INFO][5298] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.199/26] block=192.168.45.192/26 handle="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.327 [INFO][5298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.199/26] handle="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" host="ip-172-31-22-130" Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.327 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:40:58.559239 containerd[2009]: 2025-07-15 04:40:58.327 [INFO][5298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.199/26] IPv6=[] ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" HandleID="k8s-pod-network.943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Workload="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.561513 containerd[2009]: 2025-07-15 04:40:58.375 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0", GenerateName:"calico-kube-controllers-5fc5d88d78-", Namespace:"calico-system", SelfLink:"", UID:"c2fbd100-bc42-468d-a8e5-9c4e6ab30466", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc5d88d78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"calico-kube-controllers-5fc5d88d78-wgx85", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali605657c644c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:58.561513 containerd[2009]: 2025-07-15 04:40:58.378 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.199/32] ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.561513 containerd[2009]: 2025-07-15 04:40:58.380 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali605657c644c ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.561513 containerd[2009]: 2025-07-15 04:40:58.423 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.561513 containerd[2009]: 
2025-07-15 04:40:58.428 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0", GenerateName:"calico-kube-controllers-5fc5d88d78-", Namespace:"calico-system", SelfLink:"", UID:"c2fbd100-bc42-468d-a8e5-9c4e6ab30466", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc5d88d78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694", Pod:"calico-kube-controllers-5fc5d88d78-wgx85", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali605657c644c", MAC:"ea:18:ab:55:04:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:40:58.561513 containerd[2009]: 2025-07-15 04:40:58.537 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" Namespace="calico-system" Pod="calico-kube-controllers-5fc5d88d78-wgx85" WorkloadEndpoint="ip--172--31--22--130-k8s-calico--kube--controllers--5fc5d88d78--wgx85-eth0" Jul 15 04:40:58.660315 containerd[2009]: time="2025-07-15T04:40:58.659979295Z" level=info msg="connecting to shim 943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694" address="unix:///run/containerd/s/032dc91a1ab3304a50315b993771b68622fdacd1e68c8820dbeaa1d50ad450c8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:40:58.736588 systemd[1]: Started cri-containerd-b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6.scope - libcontainer container b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6. Jul 15 04:40:58.829006 systemd[1]: Started cri-containerd-943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694.scope - libcontainer container 943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694. 
Jul 15 04:40:59.250316 containerd[2009]: time="2025-07-15T04:40:59.250227425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddc7c5889-jr9jp,Uid:bb3d31a1-cb17-4c13-8521-539eb31aad4e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6\"" Jul 15 04:40:59.347915 kubelet[3606]: I0715 04:40:59.347714 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6lqdq" podStartSLOduration=46.347691029 podStartE2EDuration="46.347691029s" podCreationTimestamp="2025-07-15 04:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:40:59.302339268 +0000 UTC m=+51.774530155" watchObservedRunningTime="2025-07-15 04:40:59.347691029 +0000 UTC m=+51.819881880" Jul 15 04:40:59.413495 containerd[2009]: time="2025-07-15T04:40:59.413339169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc5d88d78-wgx85,Uid:c2fbd100-bc42-468d-a8e5-9c4e6ab30466,Namespace:calico-system,Attempt:0,} returns sandbox id \"943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694\"" Jul 15 04:40:59.578875 systemd-networkd[1869]: cali605657c644c: Gained IPv6LL Jul 15 04:40:59.782960 containerd[2009]: time="2025-07-15T04:40:59.782686952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc67v,Uid:2d26261d-e960-406a-9b63-17a87a2b10d4,Namespace:calico-system,Attempt:0,}" Jul 15 04:40:59.836682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539784135.mount: Deactivated successfully. Jul 15 04:40:59.885327 containerd[2009]: time="2025-07-15T04:40:59.884626741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:59.891708 containerd[2009]: time="2025-07-15T04:40:59.891588910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:40:59.899126 containerd[2009]: time="2025-07-15T04:40:59.898897142Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:59.910502 containerd[2009]: time="2025-07-15T04:40:59.910352647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:59.916313 containerd[2009]: time="2025-07-15T04:40:59.915193126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.947411947s" Jul 15 04:40:59.916313 containerd[2009]: time="2025-07-15T04:40:59.915254931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:40:59.923913 containerd[2009]: time="2025-07-15T04:40:59.923866683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" 
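The whisker-backend pull above reports 30814411 bytes fetched in 4.947411947s, i.e. roughly 29 MiB at about 6 MiB/s. A back-of-envelope check with the values copied from the log:

```go
// Back-of-envelope pull rate from the "Pulled image ... in 4.947411947s" line
// above. Numbers are copied from the log, not measured here.
package main

import (
	"fmt"
	"time"
)

func main() {
	const imageBytes = 30814411 // reported size of whisker-backend:v3.30.2
	pullTime, err := time.ParseDuration("4.947411947s")
	if err != nil {
		panic(err)
	}
	rate := float64(imageBytes) / pullTime.Seconds()
	fmt.Printf("pulled %.1f MiB in %s -> %.1f MiB/s\n",
		float64(imageBytes)/(1<<20), pullTime, rate/(1<<20))
}
```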
Jul 15 04:40:59.926332 containerd[2009]: time="2025-07-15T04:40:59.924561291Z" level=info msg="CreateContainer within sandbox \"719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:40:59.949190 containerd[2009]: time="2025-07-15T04:40:59.949136038Z" level=info msg="Container 61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:40:59.961625 systemd-networkd[1869]: cali4b0db169ecf: Gained IPv6LL Jul 15 04:40:59.994030 containerd[2009]: time="2025-07-15T04:40:59.993451901Z" level=info msg="CreateContainer within sandbox \"719adc453bb10e43f663e77102774a5f560452798712add99a2d6bdced44d680\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3\"" Jul 15 04:40:59.997370 containerd[2009]: time="2025-07-15T04:40:59.996431427Z" level=info msg="StartContainer for \"61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3\"" Jul 15 04:41:00.019764 containerd[2009]: time="2025-07-15T04:41:00.017265940Z" level=info msg="connecting to shim 61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3" address="unix:///run/containerd/s/cdf454d6d7c16a39b2f1d34baf4a89f8d3a0240a27d06c1ab538d5cbc78fab86" protocol=ttrpc version=3 Jul 15 04:41:00.121676 systemd[1]: Started cri-containerd-61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3.scope - libcontainer container 61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3. Jul 15 04:41:00.213400 systemd-networkd[1869]: caliecc2a1ba98b: Link UP Jul 15 04:41:00.214734 systemd-networkd[1869]: caliecc2a1ba98b: Gained carrier Jul 15 04:41:00.275224 systemd[1]: Started sshd@9-172.31.22.130:22-139.178.89.65:46808.service - OpenSSH per-connection server daemon (139.178.89.65:46808). 
Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:40:59.967 [INFO][5560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0 csi-node-driver- calico-system 2d26261d-e960-406a-9b63-17a87a2b10d4 716 0 2025-07-15 04:40:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-130 csi-node-driver-gc67v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliecc2a1ba98b [] [] }} ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:40:59.967 [INFO][5560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.109 [INFO][5579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" HandleID="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Workload="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.110 [INFO][5579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" HandleID="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Workload="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-130", "pod":"csi-node-driver-gc67v", "timestamp":"2025-07-15 04:41:00.109844592 +0000 UTC"}, Hostname:"ip-172-31-22-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.110 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.111 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.111 [INFO][5579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-130' Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.137 [INFO][5579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.151 [INFO][5579] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.161 [INFO][5579] ipam/ipam.go 511: Trying affinity for 192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.165 [INFO][5579] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.170 [INFO][5579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.170 [INFO][5579] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.175 [INFO][5579] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.184 [INFO][5579] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.200 [INFO][5579] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.200/26] block=192.168.45.192/26 handle="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.201 [INFO][5579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.200/26] handle="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" host="ip-172-31-22-130" Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.202 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:41:00.298634 containerd[2009]: 2025-07-15 04:41:00.202 [INFO][5579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.200/26] IPv6=[] ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" HandleID="k8s-pod-network.d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Workload="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.300103 containerd[2009]: 2025-07-15 04:41:00.206 [INFO][5560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d26261d-e960-406a-9b63-17a87a2b10d4", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"", Pod:"csi-node-driver-gc67v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliecc2a1ba98b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:00.300103 containerd[2009]: 2025-07-15 04:41:00.206 [INFO][5560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.200/32] ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.300103 containerd[2009]: 2025-07-15 04:41:00.206 [INFO][5560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecc2a1ba98b ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.300103 containerd[2009]: 2025-07-15 04:41:00.217 [INFO][5560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.300103 containerd[2009]: 2025-07-15 04:41:00.219 [INFO][5560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" 
Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d26261d-e960-406a-9b63-17a87a2b10d4", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 40, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-130", ContainerID:"d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc", Pod:"csi-node-driver-gc67v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliecc2a1ba98b", MAC:"82:34:95:4f:91:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:41:00.300103 containerd[2009]: 2025-07-15 04:41:00.284 [INFO][5560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" Namespace="calico-system" Pod="csi-node-driver-gc67v" WorkloadEndpoint="ip--172--31--22--130-k8s-csi--node--driver--gc67v-eth0" Jul 15 04:41:00.436601 containerd[2009]: time="2025-07-15T04:41:00.436361693Z" level=info msg="connecting to shim d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc" address="unix:///run/containerd/s/f52abf29f0f10625f56800800ac1bcede27f8e572dac71428f83590bf1cfc574" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:00.513718 containerd[2009]: time="2025-07-15T04:41:00.513396192Z" level=info msg="StartContainer for \"61ab480e168c9cc0d9ecf2271bb638fdeed071867a137c2f2eb87d6fbebe9cd3\" returns successfully" Jul 15 04:41:00.567017 systemd[1]: Started cri-containerd-d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc.scope - libcontainer container d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc. Jul 15 04:41:00.602091 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 46808 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:00.614273 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:00.641707 systemd-logind[1980]: New session 10 of user core. Jul 15 04:41:00.648819 systemd[1]: Started session-10.scope - Session 10 of User core. 
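The sshd "Accepted publickey ... RSA SHA256:OM8Z..." line above prints OpenSSH's SHA256 key fingerprint: the unpadded base64 of SHA-256 over the wire-format public key blob. A sketch of that computation from an authorized_keys-style line; the key material below is a placeholder, not the key from the log:

```go
// Sketch of how the "SHA256:..." fingerprint in the sshd line is formed:
// unpadded base64 of SHA-256 over the base64-decoded key blob of an
// authorized_keys entry. The sample key here is a placeholder.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

func fingerprint(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed key line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1]) // wire-format key blob
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Substitute the contents of a real *.pub file to reproduce the fingerprint
	// sshd logs at login.
	fp, err := fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8Qh3Kqp8p5m5Zq0m4H5m3n1sYQn2m7o6n5q4r3s2t1 demo@example")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(fp)
}
```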
Jul 15 04:41:00.737681 containerd[2009]: time="2025-07-15T04:41:00.737459657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc67v,Uid:2d26261d-e960-406a-9b63-17a87a2b10d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc\"" Jul 15 04:41:00.969810 systemd-networkd[1869]: vxlan.calico: Link UP Jul 15 04:41:00.969831 systemd-networkd[1869]: vxlan.calico: Gained carrier Jul 15 04:41:01.051869 (udev-worker)[4675]: Network interface NamePolicy= disabled on kernel command line. Jul 15 04:41:01.179116 sshd[5679]: Connection closed by 139.178.89.65 port 46808 Jul 15 04:41:01.179377 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:01.197614 systemd[1]: sshd@9-172.31.22.130:22-139.178.89.65:46808.service: Deactivated successfully. Jul 15 04:41:01.207021 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:41:01.213673 systemd-logind[1980]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:41:01.220404 systemd-logind[1980]: Removed session 10. Jul 15 04:41:01.753512 systemd-networkd[1869]: caliecc2a1ba98b: Gained IPv6LL Jul 15 04:41:02.650518 systemd-networkd[1869]: vxlan.calico: Gained IPv6LL Jul 15 04:41:04.804596 containerd[2009]: time="2025-07-15T04:41:04.804525363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:04.807024 containerd[2009]: time="2025-07-15T04:41:04.806913285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 04:41:04.809694 containerd[2009]: time="2025-07-15T04:41:04.809579767Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:04.818214 containerd[2009]: time="2025-07-15T04:41:04.817890230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:04.819782 containerd[2009]: time="2025-07-15T04:41:04.819729977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 4.895512277s" Jul 15 04:41:04.820103 containerd[2009]: time="2025-07-15T04:41:04.819962865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:41:04.824668 containerd[2009]: time="2025-07-15T04:41:04.824579751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 04:41:04.828650 containerd[2009]: time="2025-07-15T04:41:04.828563198Z" level=info msg="CreateContainer within sandbox \"89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:41:04.850338 containerd[2009]: time="2025-07-15T04:41:04.849875085Z" level=info msg="Container f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab: CDI devices from CRI 
Config.CDIDevices: []" Jul 15 04:41:04.886937 containerd[2009]: time="2025-07-15T04:41:04.886842584Z" level=info msg="CreateContainer within sandbox \"89568379186bf9a98c66b1be8b2d4a88f539004e85e43e6c3545ceee921be7fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab\"" Jul 15 04:41:04.893390 containerd[2009]: time="2025-07-15T04:41:04.893336784Z" level=info msg="StartContainer for \"f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab\"" Jul 15 04:41:04.900221 containerd[2009]: time="2025-07-15T04:41:04.900148596Z" level=info msg="connecting to shim f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab" address="unix:///run/containerd/s/cb08d1e2c71643a00fe8fb8367469bf182b1c6f6102c5e1f4901a4288b262707" protocol=ttrpc version=3 Jul 15 04:41:04.947464 ntpd[1973]: Listen normally on 7 vxlan.calico 192.168.45.192:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 7 vxlan.calico 192.168.45.192:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 8 calid54dbee5a8f [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 9 cali412b7d016cf [fe80::ecee:eeff:feee:eeee%5]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 10 cali17e15bcc115 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 11 calid0bc130e7c1 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 12 cali2c95a7148e3 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 13 cali4b0db169ecf [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 14 cali605657c644c [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 15 caliecc2a1ba98b [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 04:41:04.950028 ntpd[1973]: 15 Jul 04:41:04 ntpd[1973]: Listen normally on 16 vxlan.calico [fe80::6498:3bff:fe31:903a%12]:123 Jul 15 04:41:04.947636 ntpd[1973]: Listen normally on 8 calid54dbee5a8f [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 04:41:04.947714 ntpd[1973]: Listen normally on 9 cali412b7d016cf [fe80::ecee:eeff:feee:eeee%5]:123 Jul 15 04:41:04.947780 ntpd[1973]: Listen normally on 10 cali17e15bcc115 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 15 04:41:04.947851 ntpd[1973]: Listen normally on 11 calid0bc130e7c1 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 15 04:41:04.947914 ntpd[1973]: Listen normally on 12 cali2c95a7148e3 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 04:41:04.947983 ntpd[1973]: Listen normally on 13 cali4b0db169ecf [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 04:41:04.948047 ntpd[1973]: Listen normally on 14 cali605657c644c [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 04:41:04.948109 ntpd[1973]: Listen normally on 15 caliecc2a1ba98b [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 04:41:04.948170 ntpd[1973]: Listen normally on 16 vxlan.calico [fe80::6498:3bff:fe31:903a%12]:123 Jul 15 04:41:04.995717 systemd[1]: Started cri-containerd-f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab.scope - libcontainer container f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab. 
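The ntpd lines above bind to fe80::ecee:eeff:feee:eeee on every cali* interface. That is the EUI-64 link-local address derived from ee:ee:ee:ee:ee:ee, consistent with Calico's convention of giving host-side veths that fixed MAC: flip the universal/local bit of the first octet, splice ff:fe into the middle, and prepend fe80::/64. A small Go sketch of the derivation:

```go
// Derive the EUI-64 link-local address that ntpd binds to on each cali* veth.
// fe80::ecee:eeff:feee:eeee follows from the conventional MAC ee:ee:ee:ee:ee:ee.
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
	var ip [16]byte
	ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 prefix
	ip[8] = mac[0] ^ 0x02     // flip the universal/local bit
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
	return netip.AddrFrom16(ip)
}

func main() {
	mac, err := net.ParseMAC("ee:ee:ee:ee:ee:ee") // Calico's host-side veth MAC
	if err != nil {
		panic(err)
	}
	fmt.Println(linkLocalFromMAC(mac)) // prints fe80::ecee:eeff:feee:eeee
}
```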
Jul 15 04:41:05.118442 containerd[2009]: time="2025-07-15T04:41:05.118377460Z" level=info msg="StartContainer for \"f1c06edbba4d89679b4011eb9709b5ea100df4f4c3df7e1a5c7cb794c4de1eab\" returns successfully" Jul 15 04:41:05.395825 kubelet[3606]: I0715 04:41:05.395326 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56678d84f4-nwpf4" podStartSLOduration=6.887552264 podStartE2EDuration="13.395274855s" podCreationTimestamp="2025-07-15 04:40:52 +0000 UTC" firstStartedPulling="2025-07-15 04:40:53.412696297 +0000 UTC m=+45.884887148" lastFinishedPulling="2025-07-15 04:40:59.920418792 +0000 UTC m=+52.392609739" observedRunningTime="2025-07-15 04:41:01.359264169 +0000 UTC m=+53.831455068" watchObservedRunningTime="2025-07-15 04:41:05.395274855 +0000 UTC m=+57.867465730" Jul 15 04:41:06.245487 systemd[1]: Started sshd@10-172.31.22.130:22-139.178.89.65:46822.service - OpenSSH per-connection server daemon (139.178.89.65:46822). Jul 15 04:41:06.508823 sshd[5811]: Accepted publickey for core from 139.178.89.65 port 46822 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:06.511791 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:06.521341 systemd-logind[1980]: New session 11 of user core. Jul 15 04:41:06.528583 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 04:41:06.879071 sshd[5814]: Connection closed by 139.178.89.65 port 46822 Jul 15 04:41:06.878892 sshd-session[5811]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:06.893639 systemd[1]: sshd@10-172.31.22.130:22-139.178.89.65:46822.service: Deactivated successfully. Jul 15 04:41:06.901161 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:41:06.903395 systemd-logind[1980]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:41:06.909756 systemd-logind[1980]: Removed session 11. Jul 15 04:41:07.835565 containerd[2009]: time="2025-07-15T04:41:07.835491535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"3e174e324441da814867252fda82316d97f95d8f28cde16ecef88c223d81ba26\" pid:5848 exited_at:{seconds:1752554467 nanos:834887265}" Jul 15 04:41:07.873982 kubelet[3606]: I0715 04:41:07.873074 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jb4bp" podStartSLOduration=33.323162898 podStartE2EDuration="42.87305113s" podCreationTimestamp="2025-07-15 04:40:25 +0000 UTC" firstStartedPulling="2025-07-15 04:40:55.272357847 +0000 UTC m=+47.744548710" lastFinishedPulling="2025-07-15 04:41:04.822246091 +0000 UTC m=+57.294436942" observedRunningTime="2025-07-15 04:41:05.398716365 +0000 UTC m=+57.870907228" watchObservedRunningTime="2025-07-15 04:41:07.87305113 +0000 UTC m=+60.345241993" Jul 15 04:41:10.090282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031057267.mount: Deactivated successfully. 
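In the kubelet pod_startup_latency_tracker line above, podStartSLOduration appears to be the end-to-end duration with the image-pull window subtracted: lastFinishedPulling minus firstStartedPulling is about 9.55s, and 42.873s minus 9.55s is about 33.323s, matching the logged SLO value. A quick cross-check with the timestamps copied from the log:

```go
// Cross-check of the kubelet startup-latency line above: SLO duration looks like
// e2e duration minus the image-pull window. All values are copied from the log.
package main

import (
	"fmt"
	"time"
)

func mustTime(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstPull := mustTime("2025-07-15 04:40:55.272357847 +0000 UTC")
	lastPull := mustTime("2025-07-15 04:41:04.822246091 +0000 UTC")

	e2e, err := time.ParseDuration("42.87305113s") // podStartE2EDuration from the log
	if err != nil {
		panic(err)
	}
	pull := lastPull.Sub(firstPull)

	fmt.Printf("image pull window: %v\n", pull)      // ~9.55s
	fmt.Printf("e2e - pull:        %v\n", e2e-pull)  // ~33.323s, the logged SLO duration
}
```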
Jul 15 04:41:10.954483 containerd[2009]: time="2025-07-15T04:41:10.954398195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:10.956678 containerd[2009]: time="2025-07-15T04:41:10.956606003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 04:41:10.957697 containerd[2009]: time="2025-07-15T04:41:10.957619148Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:10.962411 containerd[2009]: time="2025-07-15T04:41:10.962339350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:10.964076 containerd[2009]: time="2025-07-15T04:41:10.963770415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 6.139111743s" Jul 15 04:41:10.964076 containerd[2009]: time="2025-07-15T04:41:10.963821365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 04:41:10.967346 containerd[2009]: time="2025-07-15T04:41:10.966877796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:41:10.969343 containerd[2009]: time="2025-07-15T04:41:10.969257574Z" level=info msg="CreateContainer within sandbox \"2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 04:41:10.988595 containerd[2009]: time="2025-07-15T04:41:10.988515788Z" level=info msg="Container 2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:11.027154 containerd[2009]: time="2025-07-15T04:41:11.026947574Z" level=info msg="CreateContainer within sandbox \"2b7560dfd68f4932f4f87423b39884a338bab5fb465d70a63fbc3381522ba480\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\"" Jul 15 04:41:11.027991 containerd[2009]: time="2025-07-15T04:41:11.027927976Z" level=info msg="StartContainer for \"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\"" Jul 15 04:41:11.030755 containerd[2009]: time="2025-07-15T04:41:11.030617786Z" level=info msg="connecting to shim 2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e" address="unix:///run/containerd/s/745319dc3572e0054af176f2aeeef6631aea4a8c5d4508d0072a5114fd963d96" protocol=ttrpc version=3 Jul 15 04:41:11.093764 systemd[1]: Started cri-containerd-2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e.scope - libcontainer container 2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e. 
Jul 15 04:41:11.207546 containerd[2009]: time="2025-07-15T04:41:11.206669737Z" level=info msg="StartContainer for \"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" returns successfully" Jul 15 04:41:11.342810 containerd[2009]: time="2025-07-15T04:41:11.342729877Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:11.345352 containerd[2009]: time="2025-07-15T04:41:11.344417469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 04:41:11.350554 containerd[2009]: time="2025-07-15T04:41:11.350451135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 383.508271ms" Jul 15 04:41:11.350554 containerd[2009]: time="2025-07-15T04:41:11.350528604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:41:11.354677 containerd[2009]: time="2025-07-15T04:41:11.354474078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 04:41:11.361324 containerd[2009]: time="2025-07-15T04:41:11.361208097Z" level=info msg="CreateContainer within sandbox \"b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:41:11.374091 containerd[2009]: time="2025-07-15T04:41:11.372532041Z" level=info msg="Container 5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:11.388196 containerd[2009]: time="2025-07-15T04:41:11.387933308Z" level=info msg="CreateContainer within sandbox \"b2c0933744b090ccd1c34fb6e128b897c274f9904f193d2cbd1d47d77802c9b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59\"" Jul 15 04:41:11.392205 containerd[2009]: time="2025-07-15T04:41:11.392154285Z" level=info msg="StartContainer for \"5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59\"" Jul 15 04:41:11.399275 containerd[2009]: time="2025-07-15T04:41:11.399096364Z" level=info msg="connecting to shim 5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59" address="unix:///run/containerd/s/2e3291955fd48c82593e5dbaa4fef7ca1df00efec16fb9a95a0675402cfbc806" protocol=ttrpc version=3 Jul 15 04:41:11.463763 kubelet[3606]: I0715 04:41:11.463557 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-hx5c6" podStartSLOduration=27.063187087 podStartE2EDuration="39.46353203s" podCreationTimestamp="2025-07-15 04:40:32 +0000 UTC" firstStartedPulling="2025-07-15 04:40:58.565207672 +0000 UTC m=+51.037398523" lastFinishedPulling="2025-07-15 04:41:10.965552531 +0000 UTC m=+63.437743466" observedRunningTime="2025-07-15 04:41:11.462851575 +0000 UTC m=+63.935042450" watchObservedRunningTime="2025-07-15 04:41:11.46353203 +0000 UTC m=+63.935722905" Jul 15 04:41:11.479775 systemd[1]: Started cri-containerd-5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59.scope - libcontainer 
container 5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59. Jul 15 04:41:11.626780 containerd[2009]: time="2025-07-15T04:41:11.626702665Z" level=info msg="StartContainer for \"5b4843f29539e4be4a593017f78e56e34830df1a5693d8b24504094d0f017f59\" returns successfully" Jul 15 04:41:11.711828 containerd[2009]: time="2025-07-15T04:41:11.711765852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"6c9ad92e1d7fd001cc3ceb4e7391649c67a32ed73ac244d53b6f621bd4d969b8\" pid:5944 exit_status:1 exited_at:{seconds:1752554471 nanos:710924593}" Jul 15 04:41:11.912717 systemd[1]: Started sshd@11-172.31.22.130:22-139.178.89.65:48042.service - OpenSSH per-connection server daemon (139.178.89.65:48042). Jul 15 04:41:12.120225 sshd[5979]: Accepted publickey for core from 139.178.89.65 port 48042 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:12.123416 sshd-session[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:12.141100 systemd-logind[1980]: New session 12 of user core. Jul 15 04:41:12.150608 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:41:12.530245 kubelet[3606]: I0715 04:41:12.529721 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ddc7c5889-jr9jp" podStartSLOduration=35.440790765 podStartE2EDuration="47.529586352s" podCreationTimestamp="2025-07-15 04:40:25 +0000 UTC" firstStartedPulling="2025-07-15 04:40:59.264144399 +0000 UTC m=+51.736335262" lastFinishedPulling="2025-07-15 04:41:11.352939902 +0000 UTC m=+63.825130849" observedRunningTime="2025-07-15 04:41:12.526676811 +0000 UTC m=+64.998867722" watchObservedRunningTime="2025-07-15 04:41:12.529586352 +0000 UTC m=+65.001777227" Jul 15 04:41:12.558435 sshd[5982]: Connection closed by 139.178.89.65 port 48042 Jul 15 04:41:12.560438 sshd-session[5979]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:12.574077 systemd[1]: sshd@11-172.31.22.130:22-139.178.89.65:48042.service: Deactivated successfully. Jul 15 04:41:12.579165 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 04:41:12.588474 systemd-logind[1980]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:41:12.613761 systemd[1]: Started sshd@12-172.31.22.130:22-139.178.89.65:48050.service - OpenSSH per-connection server daemon (139.178.89.65:48050). Jul 15 04:41:12.621896 systemd-logind[1980]: Removed session 12. Jul 15 04:41:12.936844 sshd[6013]: Accepted publickey for core from 139.178.89.65 port 48050 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:12.943836 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:12.963164 systemd-logind[1980]: New session 13 of user core. Jul 15 04:41:12.967683 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 04:41:13.457466 kubelet[3606]: I0715 04:41:13.457203 3606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:41:13.558825 sshd[6023]: Connection closed by 139.178.89.65 port 48050 Jul 15 04:41:13.558533 sshd-session[6013]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:13.573863 systemd[1]: sshd@12-172.31.22.130:22-139.178.89.65:48050.service: Deactivated successfully. Jul 15 04:41:13.581826 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 15 04:41:13.589341 systemd-logind[1980]: Session 13 logged out. Waiting for processes to exit. Jul 15 04:41:13.647920 systemd[1]: Started sshd@13-172.31.22.130:22-139.178.89.65:48056.service - OpenSSH per-connection server daemon (139.178.89.65:48056). Jul 15 04:41:13.652229 systemd-logind[1980]: Removed session 13. Jul 15 04:41:13.678117 containerd[2009]: time="2025-07-15T04:41:13.678055321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"5179c03eed5c103b1b2e814fa9cdaecd4c395bb0476dc33305fee8001ed6e342\" pid:6002 exit_status:1 exited_at:{seconds:1752554473 nanos:677222314}" Jul 15 04:41:13.949135 sshd[6040]: Accepted publickey for core from 139.178.89.65 port 48056 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:13.957057 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:13.977655 systemd-logind[1980]: New session 14 of user core. Jul 15 04:41:13.985557 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 04:41:14.516416 sshd[6062]: Connection closed by 139.178.89.65 port 48056 Jul 15 04:41:14.517384 sshd-session[6040]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:14.531797 systemd[1]: sshd@13-172.31.22.130:22-139.178.89.65:48056.service: Deactivated successfully. Jul 15 04:41:14.539700 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 04:41:14.544487 systemd-logind[1980]: Session 14 logged out. Waiting for processes to exit. Jul 15 04:41:14.548754 systemd-logind[1980]: Removed session 14. Jul 15 04:41:14.640813 containerd[2009]: time="2025-07-15T04:41:14.640650754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"a55f463f1fa355c2ad7534f0e7e8a1436ca52d4ec8fa7929a2dddafa1711e183\" pid:6055 exit_status:1 exited_at:{seconds:1752554474 nanos:635889052}" Jul 15 04:41:15.605069 containerd[2009]: time="2025-07-15T04:41:15.604743829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:15.607192 containerd[2009]: time="2025-07-15T04:41:15.606793639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 04:41:15.609129 containerd[2009]: time="2025-07-15T04:41:15.608940302Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:15.616448 containerd[2009]: time="2025-07-15T04:41:15.615806735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:15.620320 containerd[2009]: time="2025-07-15T04:41:15.617861187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.263181651s" Jul 15 04:41:15.620320 containerd[2009]: time="2025-07-15T04:41:15.617929793Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 04:41:15.624759 containerd[2009]: time="2025-07-15T04:41:15.624605053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:41:15.673529 containerd[2009]: time="2025-07-15T04:41:15.673454696Z" level=info msg="CreateContainer within sandbox \"943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 04:41:15.692868 containerd[2009]: time="2025-07-15T04:41:15.691492617Z" level=info msg="Container a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:15.706028 containerd[2009]: time="2025-07-15T04:41:15.705933537Z" level=info msg="CreateContainer within sandbox \"943204c09ee71833bc61ee38e66bd9fcf57c63321eee7fd83eb2ecb079994694\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\"" Jul 15 04:41:15.707002 containerd[2009]: time="2025-07-15T04:41:15.706956481Z" level=info msg="StartContainer for \"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\"" Jul 15 04:41:15.715382 containerd[2009]: time="2025-07-15T04:41:15.713574615Z" level=info msg="connecting to shim a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c" address="unix:///run/containerd/s/032dc91a1ab3304a50315b993771b68622fdacd1e68c8820dbeaa1d50ad450c8" protocol=ttrpc version=3 Jul 15 04:41:15.789820 systemd[1]: Started cri-containerd-a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c.scope - libcontainer container a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c. 
Jul 15 04:41:15.914491 containerd[2009]: time="2025-07-15T04:41:15.914242499Z" level=info msg="StartContainer for \"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" returns successfully" Jul 15 04:41:16.506868 kubelet[3606]: I0715 04:41:16.505986 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fc5d88d78-wgx85" podStartSLOduration=27.299731226 podStartE2EDuration="43.505818917s" podCreationTimestamp="2025-07-15 04:40:33 +0000 UTC" firstStartedPulling="2025-07-15 04:40:59.417410952 +0000 UTC m=+51.889601803" lastFinishedPulling="2025-07-15 04:41:15.623498547 +0000 UTC m=+68.095689494" observedRunningTime="2025-07-15 04:41:16.505853915 +0000 UTC m=+68.978044802" watchObservedRunningTime="2025-07-15 04:41:16.505818917 +0000 UTC m=+68.978009792" Jul 15 04:41:16.562552 containerd[2009]: time="2025-07-15T04:41:16.562447424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" id:\"0874026c1b33668510d6a8fb5a88d8f2b8c8c938aad515b062784ce10ef43b50\" pid:6134 exited_at:{seconds:1752554476 nanos:561436594}" Jul 15 04:41:17.110544 containerd[2009]: time="2025-07-15T04:41:17.110226755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:17.112532 containerd[2009]: time="2025-07-15T04:41:17.111881747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 04:41:17.113725 containerd[2009]: time="2025-07-15T04:41:17.113573729Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:17.118811 containerd[2009]: time="2025-07-15T04:41:17.118722765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:17.120320 containerd[2009]: time="2025-07-15T04:41:17.120086783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.495422755s" Jul 15 04:41:17.120320 containerd[2009]: time="2025-07-15T04:41:17.120143502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 04:41:17.125425 containerd[2009]: time="2025-07-15T04:41:17.125360713Z" level=info msg="CreateContainer within sandbox \"d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 04:41:17.141617 containerd[2009]: time="2025-07-15T04:41:17.141531059Z" level=info msg="Container 4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:17.162261 containerd[2009]: time="2025-07-15T04:41:17.162182772Z" level=info msg="CreateContainer within sandbox \"d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc\"" Jul 15 04:41:17.163829 containerd[2009]: time="2025-07-15T04:41:17.163660685Z" level=info msg="StartContainer for \"4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc\"" Jul 15 04:41:17.167425 containerd[2009]: time="2025-07-15T04:41:17.167372624Z" level=info msg="connecting to shim 4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc" address="unix:///run/containerd/s/f52abf29f0f10625f56800800ac1bcede27f8e572dac71428f83590bf1cfc574" protocol=ttrpc version=3 Jul 15 04:41:17.215609 systemd[1]: Started cri-containerd-4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc.scope - libcontainer container 4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc. Jul 15 04:41:17.292535 containerd[2009]: time="2025-07-15T04:41:17.292451334Z" level=info msg="StartContainer for \"4b6475bb8913537c5b5f6771715a18545cee607252e974b01a405ef2361f81bc\" returns successfully" Jul 15 04:41:17.297978 containerd[2009]: time="2025-07-15T04:41:17.297497833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 04:41:18.904408 containerd[2009]: time="2025-07-15T04:41:18.903253143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:18.905482 containerd[2009]: time="2025-07-15T04:41:18.905388627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 04:41:18.906304 containerd[2009]: time="2025-07-15T04:41:18.906236195Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:18.912549 containerd[2009]: time="2025-07-15T04:41:18.912477788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:18.916622 containerd[2009]: time="2025-07-15T04:41:18.916542519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.618308937s" Jul 15 04:41:18.916923 containerd[2009]: time="2025-07-15T04:41:18.916871874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 04:41:18.924980 containerd[2009]: time="2025-07-15T04:41:18.924916526Z" level=info msg="CreateContainer within sandbox \"d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 04:41:18.947648 containerd[2009]: time="2025-07-15T04:41:18.947509048Z" level=info msg="Container a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:18.976155 containerd[2009]: time="2025-07-15T04:41:18.976094204Z" level=info msg="CreateContainer within sandbox 
\"d1a8bcecbe856e7a8d98e37116fd6fc66f5a058ebf30f266cf3cd9d77c998acc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3\"" Jul 15 04:41:18.984548 containerd[2009]: time="2025-07-15T04:41:18.983779252Z" level=info msg="StartContainer for \"a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3\"" Jul 15 04:41:18.993260 containerd[2009]: time="2025-07-15T04:41:18.993206128Z" level=info msg="connecting to shim a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3" address="unix:///run/containerd/s/f52abf29f0f10625f56800800ac1bcede27f8e572dac71428f83590bf1cfc574" protocol=ttrpc version=3 Jul 15 04:41:19.035791 systemd[1]: Started cri-containerd-a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3.scope - libcontainer container a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3. Jul 15 04:41:19.140804 containerd[2009]: time="2025-07-15T04:41:19.140643969Z" level=info msg="StartContainer for \"a174ba78d05d50c2dd67448e92945e00d5dda60d8e81b5f6b74e867f73bef4e3\" returns successfully" Jul 15 04:41:19.560877 systemd[1]: Started sshd@14-172.31.22.130:22-139.178.89.65:48356.service - OpenSSH per-connection server daemon (139.178.89.65:48356). Jul 15 04:41:19.770858 sshd[6219]: Accepted publickey for core from 139.178.89.65 port 48356 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:19.774952 sshd-session[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:19.787373 systemd-logind[1980]: New session 15 of user core. Jul 15 04:41:19.794572 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 04:41:19.969171 kubelet[3606]: I0715 04:41:19.969112 3606 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 04:41:19.970068 kubelet[3606]: I0715 04:41:19.969185 3606 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 04:41:20.097584 sshd[6222]: Connection closed by 139.178.89.65 port 48356 Jul 15 04:41:20.098483 sshd-session[6219]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:20.105502 systemd[1]: sshd@14-172.31.22.130:22-139.178.89.65:48356.service: Deactivated successfully. Jul 15 04:41:20.110205 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 04:41:20.111930 systemd-logind[1980]: Session 15 logged out. Waiting for processes to exit. Jul 15 04:41:20.116017 systemd-logind[1980]: Removed session 15. Jul 15 04:41:25.137567 systemd[1]: Started sshd@15-172.31.22.130:22-139.178.89.65:48372.service - OpenSSH per-connection server daemon (139.178.89.65:48372). Jul 15 04:41:25.337183 sshd[6244]: Accepted publickey for core from 139.178.89.65 port 48372 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:25.339602 sshd-session[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:25.347835 systemd-logind[1980]: New session 16 of user core. Jul 15 04:41:25.356581 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 15 04:41:25.628744 sshd[6247]: Connection closed by 139.178.89.65 port 48372 Jul 15 04:41:25.630070 sshd-session[6244]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:25.638282 systemd[1]: sshd@15-172.31.22.130:22-139.178.89.65:48372.service: Deactivated successfully. Jul 15 04:41:25.642526 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 04:41:25.645978 systemd-logind[1980]: Session 16 logged out. Waiting for processes to exit. Jul 15 04:41:25.649808 systemd-logind[1980]: Removed session 16. Jul 15 04:41:30.666912 systemd[1]: Started sshd@16-172.31.22.130:22-139.178.89.65:45844.service - OpenSSH per-connection server daemon (139.178.89.65:45844). Jul 15 04:41:30.876890 sshd[6263]: Accepted publickey for core from 139.178.89.65 port 45844 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:30.881593 sshd-session[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:30.896922 systemd-logind[1980]: New session 17 of user core. Jul 15 04:41:30.902921 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 04:41:31.205798 sshd[6266]: Connection closed by 139.178.89.65 port 45844 Jul 15 04:41:31.207281 sshd-session[6263]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:31.215070 systemd[1]: sshd@16-172.31.22.130:22-139.178.89.65:45844.service: Deactivated successfully. Jul 15 04:41:31.221481 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 04:41:31.225499 systemd-logind[1980]: Session 17 logged out. Waiting for processes to exit. Jul 15 04:41:31.228626 systemd-logind[1980]: Removed session 17. Jul 15 04:41:35.053112 containerd[2009]: time="2025-07-15T04:41:35.053043276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" id:\"ac76a00a366fa8a36dabeaee0d13595a1aa7d5314221c955d9444ce7b991f1eb\" pid:6288 exited_at:{seconds:1752554495 nanos:52249921}" Jul 15 04:41:36.247771 systemd[1]: Started sshd@17-172.31.22.130:22-139.178.89.65:45846.service - OpenSSH per-connection server daemon (139.178.89.65:45846). Jul 15 04:41:36.485373 sshd[6298]: Accepted publickey for core from 139.178.89.65 port 45846 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:36.488905 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:36.501156 systemd-logind[1980]: New session 18 of user core. Jul 15 04:41:36.510841 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 04:41:36.811441 sshd[6301]: Connection closed by 139.178.89.65 port 45846 Jul 15 04:41:36.812841 sshd-session[6298]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:36.820139 systemd[1]: sshd@17-172.31.22.130:22-139.178.89.65:45846.service: Deactivated successfully. Jul 15 04:41:36.823906 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 04:41:36.829192 systemd-logind[1980]: Session 18 logged out. Waiting for processes to exit. Jul 15 04:41:36.831256 systemd-logind[1980]: Removed session 18. Jul 15 04:41:36.851769 systemd[1]: Started sshd@18-172.31.22.130:22-139.178.89.65:45858.service - OpenSSH per-connection server daemon (139.178.89.65:45858). 
Jul 15 04:41:37.056689 sshd[6313]: Accepted publickey for core from 139.178.89.65 port 45858 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:37.059537 sshd-session[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:37.073952 systemd-logind[1980]: New session 19 of user core. Jul 15 04:41:37.082621 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 04:41:37.800578 sshd[6316]: Connection closed by 139.178.89.65 port 45858 Jul 15 04:41:37.803248 sshd-session[6313]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:37.815268 systemd[1]: sshd@18-172.31.22.130:22-139.178.89.65:45858.service: Deactivated successfully. Jul 15 04:41:37.821597 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 04:41:37.823116 containerd[2009]: time="2025-07-15T04:41:37.822949435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"abe73b0bd439af9452920bafdd76cf5c263b807178da89963de2162cbfa799f8\" pid:6333 exited_at:{seconds:1752554497 nanos:820128950}" Jul 15 04:41:37.827011 systemd-logind[1980]: Session 19 logged out. Waiting for processes to exit. Jul 15 04:41:37.849759 systemd[1]: Started sshd@19-172.31.22.130:22-139.178.89.65:45866.service - OpenSSH per-connection server daemon (139.178.89.65:45866). Jul 15 04:41:37.853860 systemd-logind[1980]: Removed session 19. Jul 15 04:41:38.056537 sshd[6350]: Accepted publickey for core from 139.178.89.65 port 45866 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:38.059629 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:38.067505 systemd-logind[1980]: New session 20 of user core. Jul 15 04:41:38.075564 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 04:41:40.898345 kubelet[3606]: I0715 04:41:40.897818 3606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:41:40.990852 kubelet[3606]: I0715 04:41:40.988970 3606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gc67v" podStartSLOduration=49.813504543 podStartE2EDuration="1m7.988943747s" podCreationTimestamp="2025-07-15 04:40:33 +0000 UTC" firstStartedPulling="2025-07-15 04:41:00.744066049 +0000 UTC m=+53.216256912" lastFinishedPulling="2025-07-15 04:41:18.919505265 +0000 UTC m=+71.391696116" observedRunningTime="2025-07-15 04:41:19.551126446 +0000 UTC m=+72.023317321" watchObservedRunningTime="2025-07-15 04:41:40.988943747 +0000 UTC m=+93.461134622" Jul 15 04:41:41.587670 sshd[6353]: Connection closed by 139.178.89.65 port 45866 Jul 15 04:41:41.593263 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:41.608132 systemd-logind[1980]: Session 20 logged out. Waiting for processes to exit. Jul 15 04:41:41.609474 systemd[1]: sshd@19-172.31.22.130:22-139.178.89.65:45866.service: Deactivated successfully. Jul 15 04:41:41.618989 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 04:41:41.623532 systemd[1]: session-20.scope: Consumed 1.076s CPU time, 81.8M memory peak. Jul 15 04:41:41.643702 systemd[1]: Started sshd@20-172.31.22.130:22-139.178.89.65:32884.service - OpenSSH per-connection server daemon (139.178.89.65:32884). Jul 15 04:41:41.644382 systemd-logind[1980]: Removed session 20. 
Jul 15 04:41:41.861461 sshd[6370]: Accepted publickey for core from 139.178.89.65 port 32884 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:41.864323 sshd-session[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:41.874451 systemd-logind[1980]: New session 21 of user core. Jul 15 04:41:41.881558 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 04:41:42.539352 sshd[6381]: Connection closed by 139.178.89.65 port 32884 Jul 15 04:41:42.537766 sshd-session[6370]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:42.551440 systemd-logind[1980]: Session 21 logged out. Waiting for processes to exit. Jul 15 04:41:42.552836 systemd[1]: sshd@20-172.31.22.130:22-139.178.89.65:32884.service: Deactivated successfully. Jul 15 04:41:42.562179 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 04:41:42.584363 systemd-logind[1980]: Removed session 21. Jul 15 04:41:42.587725 systemd[1]: Started sshd@21-172.31.22.130:22-139.178.89.65:32890.service - OpenSSH per-connection server daemon (139.178.89.65:32890). Jul 15 04:41:42.808590 sshd[6391]: Accepted publickey for core from 139.178.89.65 port 32890 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:42.812583 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:42.824044 systemd-logind[1980]: New session 22 of user core. Jul 15 04:41:42.828734 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 04:41:43.188525 sshd[6395]: Connection closed by 139.178.89.65 port 32890 Jul 15 04:41:43.189397 sshd-session[6391]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:43.197675 systemd[1]: sshd@21-172.31.22.130:22-139.178.89.65:32890.service: Deactivated successfully. Jul 15 04:41:43.204067 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 04:41:43.211655 systemd-logind[1980]: Session 22 logged out. Waiting for processes to exit. Jul 15 04:41:43.216045 systemd-logind[1980]: Removed session 22. Jul 15 04:41:43.767189 containerd[2009]: time="2025-07-15T04:41:43.766805178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" id:\"9efe4156e472d0d10e4a2091e9f61563585731910e66e798793f18a1f779cf6f\" pid:6419 exited_at:{seconds:1752554503 nanos:764894366}" Jul 15 04:41:43.871742 containerd[2009]: time="2025-07-15T04:41:43.871626969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"61acc872682289d884341e0e297d2332e72336bd7950a4a9c8b4b18ef1ea5941\" pid:6436 exited_at:{seconds:1752554503 nanos:870933716}" Jul 15 04:41:48.225426 systemd[1]: Started sshd@22-172.31.22.130:22-139.178.89.65:32904.service - OpenSSH per-connection server daemon (139.178.89.65:32904). Jul 15 04:41:48.430187 sshd[6455]: Accepted publickey for core from 139.178.89.65 port 32904 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:48.433431 sshd-session[6455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:48.444118 systemd-logind[1980]: New session 23 of user core. Jul 15 04:41:48.454764 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 04:41:48.514067 containerd[2009]: time="2025-07-15T04:41:48.513527052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"c4eccbf8d085b8772496769c2ef9ae4760475aa54d961996b0f0087c2dab057f\" pid:6470 exited_at:{seconds:1752554508 nanos:513079076}" Jul 15 04:41:48.717348 sshd[6480]: Connection closed by 139.178.89.65 port 32904 Jul 15 04:41:48.718108 sshd-session[6455]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:48.726430 systemd-logind[1980]: Session 23 logged out. Waiting for processes to exit. Jul 15 04:41:48.727703 systemd[1]: sshd@22-172.31.22.130:22-139.178.89.65:32904.service: Deactivated successfully. Jul 15 04:41:48.732852 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 04:41:48.736159 systemd-logind[1980]: Removed session 23. Jul 15 04:41:53.751944 systemd[1]: Started sshd@23-172.31.22.130:22-139.178.89.65:40984.service - OpenSSH per-connection server daemon (139.178.89.65:40984). Jul 15 04:41:53.950971 sshd[6496]: Accepted publickey for core from 139.178.89.65 port 40984 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:53.954523 sshd-session[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:53.967418 systemd-logind[1980]: New session 24 of user core. Jul 15 04:41:53.974829 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 04:41:54.260652 sshd[6499]: Connection closed by 139.178.89.65 port 40984 Jul 15 04:41:54.261163 sshd-session[6496]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:54.271392 systemd[1]: sshd@23-172.31.22.130:22-139.178.89.65:40984.service: Deactivated successfully. Jul 15 04:41:54.272699 systemd-logind[1980]: Session 24 logged out. Waiting for processes to exit. Jul 15 04:41:54.280656 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 04:41:54.291743 systemd-logind[1980]: Removed session 24. Jul 15 04:41:59.311042 systemd[1]: Started sshd@24-172.31.22.130:22-139.178.89.65:51684.service - OpenSSH per-connection server daemon (139.178.89.65:51684). Jul 15 04:41:59.540118 sshd[6512]: Accepted publickey for core from 139.178.89.65 port 51684 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:41:59.545390 sshd-session[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:41:59.558741 systemd-logind[1980]: New session 25 of user core. Jul 15 04:41:59.565643 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 04:41:59.906616 sshd[6515]: Connection closed by 139.178.89.65 port 51684 Jul 15 04:41:59.907749 sshd-session[6512]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:59.920203 systemd[1]: sshd@24-172.31.22.130:22-139.178.89.65:51684.service: Deactivated successfully. Jul 15 04:41:59.926084 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 04:41:59.931271 systemd-logind[1980]: Session 25 logged out. Waiting for processes to exit. Jul 15 04:41:59.937130 systemd-logind[1980]: Removed session 25. Jul 15 04:42:04.952760 systemd[1]: Started sshd@25-172.31.22.130:22-139.178.89.65:51700.service - OpenSSH per-connection server daemon (139.178.89.65:51700). 
Jul 15 04:42:05.150230 sshd[6528]: Accepted publickey for core from 139.178.89.65 port 51700 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:05.153629 sshd-session[6528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:05.163253 systemd-logind[1980]: New session 26 of user core. Jul 15 04:42:05.172586 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 04:42:05.489910 sshd[6531]: Connection closed by 139.178.89.65 port 51700 Jul 15 04:42:05.490412 sshd-session[6528]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:05.500787 systemd[1]: sshd@25-172.31.22.130:22-139.178.89.65:51700.service: Deactivated successfully. Jul 15 04:42:05.509823 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 04:42:05.512593 systemd-logind[1980]: Session 26 logged out. Waiting for processes to exit. Jul 15 04:42:05.516871 systemd-logind[1980]: Removed session 26. Jul 15 04:42:08.079164 containerd[2009]: time="2025-07-15T04:42:08.079068509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"9d737b670f8766265f840c2ac4913bf30e72af8fdbbff5f95c918a73a9915166\" pid:6556 exited_at:{seconds:1752554528 nanos:78642135}" Jul 15 04:42:10.526531 systemd[1]: Started sshd@26-172.31.22.130:22-139.178.89.65:44482.service - OpenSSH per-connection server daemon (139.178.89.65:44482). Jul 15 04:42:10.722239 sshd[6571]: Accepted publickey for core from 139.178.89.65 port 44482 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:10.725548 sshd-session[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:10.737726 systemd-logind[1980]: New session 27 of user core. Jul 15 04:42:10.744918 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 04:42:11.017180 sshd[6574]: Connection closed by 139.178.89.65 port 44482 Jul 15 04:42:11.018134 sshd-session[6571]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:11.025718 systemd-logind[1980]: Session 27 logged out. Waiting for processes to exit. Jul 15 04:42:11.027747 systemd[1]: sshd@26-172.31.22.130:22-139.178.89.65:44482.service: Deactivated successfully. Jul 15 04:42:11.033824 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 04:42:11.039210 systemd-logind[1980]: Removed session 27. Jul 15 04:42:13.803410 containerd[2009]: time="2025-07-15T04:42:13.802662790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" id:\"6900d8ed3e39d0edbb91c203f1fee390e59e1704c9e4531ab8905392d5656a03\" pid:6608 exited_at:{seconds:1752554533 nanos:801365039}" Jul 15 04:42:13.918515 containerd[2009]: time="2025-07-15T04:42:13.918416488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"2fad9583e6cd9066e5ddd817cc3a62b1b60c184082b8b81e83f52ad6576cb4b2\" pid:6612 exited_at:{seconds:1752554533 nanos:917169280}" Jul 15 04:42:16.059443 systemd[1]: Started sshd@27-172.31.22.130:22-139.178.89.65:44494.service - OpenSSH per-connection server daemon (139.178.89.65:44494). 
Jul 15 04:42:16.272259 sshd[6636]: Accepted publickey for core from 139.178.89.65 port 44494 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:16.276563 sshd-session[6636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:16.289711 systemd-logind[1980]: New session 28 of user core. Jul 15 04:42:16.301586 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 15 04:42:16.601809 sshd[6639]: Connection closed by 139.178.89.65 port 44494 Jul 15 04:42:16.602278 sshd-session[6636]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:16.610789 systemd-logind[1980]: Session 28 logged out. Waiting for processes to exit. Jul 15 04:42:16.613179 systemd[1]: sshd@27-172.31.22.130:22-139.178.89.65:44494.service: Deactivated successfully. Jul 15 04:42:16.624246 systemd[1]: session-28.scope: Deactivated successfully. Jul 15 04:42:16.631076 systemd-logind[1980]: Removed session 28. Jul 15 04:42:34.896071 containerd[2009]: time="2025-07-15T04:42:34.896008751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" id:\"9ff4f1ed8f2d1226174df68a65fe96d20387284513538f67f97dfd04b1460ee6\" pid:6681 exited_at:{seconds:1752554554 nanos:894612410}" Jul 15 04:42:37.837338 containerd[2009]: time="2025-07-15T04:42:37.837224049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"efbd684a5772b5ab5616dd76101f176f875a2cc668d8d87251969475e0ced742\" pid:6702 exit_status:1 exited_at:{seconds:1752554557 nanos:836635215}" Jul 15 04:42:43.819868 containerd[2009]: time="2025-07-15T04:42:43.819799746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9014f66739ea99c1930fec5c83d20141bda8b5739d036c5712e68fceb3b058c\" id:\"1971ee2b529fceeb41ed23a29634ea0d7d61d6c72b6b06ea41cbd3b3c8d9447d\" pid:6741 exited_at:{seconds:1752554563 nanos:816705918}" Jul 15 04:42:43.930498 containerd[2009]: time="2025-07-15T04:42:43.930426897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"6ce699e8cc55e0e9c78f1042aeef129f40035103ceef69e70da414dfb79a5873\" pid:6758 exited_at:{seconds:1752554563 nanos:929040031}" Jul 15 04:42:48.788878 containerd[2009]: time="2025-07-15T04:42:48.788465236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2599a992a5929402147ff9c32ccb0173306fbddcf219d7c9c33bb8bd07df2d0e\" id:\"78cced05caaffbf907655f4321e492feb3beb5fe871f206f7112d9aa4c1c410e\" pid:6787 exited_at:{seconds:1752554568 nanos:786831484}" Jul 15 04:43:03.225929 systemd[1]: cri-containerd-f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e.scope: Deactivated successfully. Jul 15 04:43:03.227670 systemd[1]: cri-containerd-f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e.scope: Consumed 6.576s CPU time, 57.6M memory peak, 188K read from disk. 
Jul 15 04:43:03.232617 containerd[2009]: time="2025-07-15T04:43:03.232485748Z" level=info msg="received exit event container_id:\"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\" id:\"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\" pid:3170 exit_status:1 exited_at:{seconds:1752554583 nanos:231879508}" Jul 15 04:43:03.234859 containerd[2009]: time="2025-07-15T04:43:03.234797380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\" id:\"f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e\" pid:3170 exit_status:1 exited_at:{seconds:1752554583 nanos:231879508}" Jul 15 04:43:03.282830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e-rootfs.mount: Deactivated successfully. Jul 15 04:43:03.639122 systemd[1]: cri-containerd-283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34.scope: Deactivated successfully. Jul 15 04:43:03.641133 systemd[1]: cri-containerd-283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34.scope: Consumed 28.772s CPU time, 111.4M memory peak, 576K read from disk. Jul 15 04:43:03.644728 containerd[2009]: time="2025-07-15T04:43:03.644667234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\" id:\"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\" pid:3932 exit_status:1 exited_at:{seconds:1752554583 nanos:643852362}" Jul 15 04:43:03.645049 containerd[2009]: time="2025-07-15T04:43:03.644866002Z" level=info msg="received exit event container_id:\"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\" id:\"283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34\" pid:3932 exit_status:1 exited_at:{seconds:1752554583 nanos:643852362}" Jul 15 04:43:03.686153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34-rootfs.mount: Deactivated successfully. Jul 15 04:43:03.912400 kubelet[3606]: I0715 04:43:03.912243 3606 scope.go:117] "RemoveContainer" containerID="f38fe7fd0b790a41d80f6d9044a86247252ed9da53ac9d8fa49bf3b832d8f56e" Jul 15 04:43:03.917006 containerd[2009]: time="2025-07-15T04:43:03.916903447Z" level=info msg="CreateContainer within sandbox \"e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 15 04:43:03.918054 kubelet[3606]: I0715 04:43:03.918004 3606 scope.go:117] "RemoveContainer" containerID="283a11f0bc0050acf32b2b7811a4259088da780f1676f879c427210d43e9cf34" Jul 15 04:43:03.921574 containerd[2009]: time="2025-07-15T04:43:03.921506239Z" level=info msg="CreateContainer within sandbox \"7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 15 04:43:03.944932 containerd[2009]: time="2025-07-15T04:43:03.944862644Z" level=info msg="Container 9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:03.961023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212027974.mount: Deactivated successfully. 
Jul 15 04:43:03.962500 containerd[2009]: time="2025-07-15T04:43:03.961461152Z" level=info msg="Container 92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:03.972020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047469143.mount: Deactivated successfully. Jul 15 04:43:03.976326 containerd[2009]: time="2025-07-15T04:43:03.976246700Z" level=info msg="CreateContainer within sandbox \"7a21645df9659ff0ee8adcd6b7a34487520430b13a71bfdef89f31b648805f87\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953\"" Jul 15 04:43:03.977936 containerd[2009]: time="2025-07-15T04:43:03.977854784Z" level=info msg="StartContainer for \"9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953\"" Jul 15 04:43:03.980501 containerd[2009]: time="2025-07-15T04:43:03.979487900Z" level=info msg="connecting to shim 9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953" address="unix:///run/containerd/s/4ef28b0d5f56edb296b8487fbeef46a9847113657bcbfdbb81b7afc24a44d2ad" protocol=ttrpc version=3 Jul 15 04:43:03.996571 containerd[2009]: time="2025-07-15T04:43:03.996510848Z" level=info msg="CreateContainer within sandbox \"e764bf7aeb31eebdf16b1b7f318582c01198303dc3d0bdf426d21cd10c56ceb7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15\"" Jul 15 04:43:03.998260 containerd[2009]: time="2025-07-15T04:43:03.997710740Z" level=info msg="StartContainer for \"92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15\"" Jul 15 04:43:04.001808 containerd[2009]: time="2025-07-15T04:43:04.001642036Z" level=info msg="connecting to shim 92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15" address="unix:///run/containerd/s/47ecd50eb916747597375683f1a863338ec534800f73bfc0ff2448bc8218cdff" protocol=ttrpc version=3 Jul 15 04:43:04.021614 systemd[1]: Started cri-containerd-9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953.scope - libcontainer container 9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953. Jul 15 04:43:04.063664 systemd[1]: Started cri-containerd-92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15.scope - libcontainer container 92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15. Jul 15 04:43:04.109031 containerd[2009]: time="2025-07-15T04:43:04.108975184Z" level=info msg="StartContainer for \"9cedbed6822633fc59ddf1a44177610dc0ccc2d8ebe8799532b71869df899953\" returns successfully" Jul 15 04:43:04.177215 containerd[2009]: time="2025-07-15T04:43:04.177050273Z" level=info msg="StartContainer for \"92e3ff2525627331e4c9b3f1eb1509064aa841fdeaf2fe7fb8e529e3bcf48f15\" returns successfully" Jul 15 04:43:07.698894 containerd[2009]: time="2025-07-15T04:43:07.698695822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"919dca06163ec3e00779ad1d1c29732a326aa487c8dff8cc695010f847b289ef\" id:\"a80bfe7f00e3c3f8ff13e3241f7a6a1cedc2d4400c68e43087bcb19c73ad9fe9\" pid:6898 exit_status:1 exited_at:{seconds:1752554587 nanos:697946974}" Jul 15 04:43:08.209130 systemd[1]: cri-containerd-5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333.scope: Deactivated successfully. Jul 15 04:43:08.210074 systemd[1]: cri-containerd-5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333.scope: Consumed 3.308s CPU time, 19.6M memory peak, 72K read from disk. 
Jul 15 04:43:08.213179 containerd[2009]: time="2025-07-15T04:43:08.213030561Z" level=info msg="received exit event container_id:\"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\" id:\"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\" pid:3190 exit_status:1 exited_at:{seconds:1752554588 nanos:212282241}" Jul 15 04:43:08.213891 containerd[2009]: time="2025-07-15T04:43:08.213368997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\" id:\"5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333\" pid:3190 exit_status:1 exited_at:{seconds:1752554588 nanos:212282241}" Jul 15 04:43:08.254195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333-rootfs.mount: Deactivated successfully. Jul 15 04:43:08.949525 kubelet[3606]: I0715 04:43:08.949474 3606 scope.go:117] "RemoveContainer" containerID="5871e34837ce25314e75a4ae01932b508cc3664b5409489925329c7054d65333" Jul 15 04:43:08.953478 containerd[2009]: time="2025-07-15T04:43:08.953430240Z" level=info msg="CreateContainer within sandbox \"8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 15 04:43:08.971971 containerd[2009]: time="2025-07-15T04:43:08.971906677Z" level=info msg="Container 6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:43:08.993219 containerd[2009]: time="2025-07-15T04:43:08.993143737Z" level=info msg="CreateContainer within sandbox \"8d1add6a571df310cd900038c1b191def9e08ea16b742cf4709899f141de50df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f\"" Jul 15 04:43:08.994678 containerd[2009]: time="2025-07-15T04:43:08.994277353Z" level=info msg="StartContainer for \"6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f\"" Jul 15 04:43:08.996386 containerd[2009]: time="2025-07-15T04:43:08.996315301Z" level=info msg="connecting to shim 6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f" address="unix:///run/containerd/s/ff542dabaa6f1b67face6c3aa9a4047f416066d73704c371f23d149dd6f278d9" protocol=ttrpc version=3 Jul 15 04:43:09.033573 systemd[1]: Started cri-containerd-6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f.scope - libcontainer container 6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f. Jul 15 04:43:09.112046 containerd[2009]: time="2025-07-15T04:43:09.111997965Z" level=info msg="StartContainer for \"6983a5f101a6d8bf03213b7eacff6bc3dad18c61b7f11a406812f112eeb9ea7f\" returns successfully" Jul 15 04:43:11.303165 kubelet[3606]: E0715 04:43:11.303090 3606 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-130?timeout=10s\": context deadline exceeded"