Jul 6 23:27:07.145009 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 6 23:27:07.145059 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025 Jul 6 23:27:07.145085 kernel: KASLR disabled due to lack of seed Jul 6 23:27:07.145102 kernel: efi: EFI v2.7 by EDK II Jul 6 23:27:07.145119 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Jul 6 23:27:07.145136 kernel: secureboot: Secure boot disabled Jul 6 23:27:07.145193 kernel: ACPI: Early table checksum verification disabled Jul 6 23:27:07.145212 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 6 23:27:07.145229 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 6 23:27:07.145245 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 6 23:27:07.145261 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 6 23:27:07.145285 kernel: ACPI: FACS 0x0000000078630000 000040 Jul 6 23:27:07.145301 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 6 23:27:07.145317 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 6 23:27:07.145335 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 6 23:27:07.145351 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 6 23:27:07.145372 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 6 23:27:07.145388 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 6 23:27:07.145404 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 6 23:27:07.145420 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 6 23:27:07.145436 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 6 23:27:07.145451 kernel: printk: legacy bootconsole [uart0] enabled Jul 6 23:27:07.145467 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 6 23:27:07.145483 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 6 23:27:07.145499 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Jul 6 23:27:07.145515 kernel: Zone ranges: Jul 6 23:27:07.145531 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 6 23:27:07.145552 kernel: DMA32 empty Jul 6 23:27:07.145568 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 6 23:27:07.145584 kernel: Device empty Jul 6 23:27:07.145599 kernel: Movable zone start for each node Jul 6 23:27:07.145615 kernel: Early memory node ranges Jul 6 23:27:07.145632 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 6 23:27:07.145648 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 6 23:27:07.145664 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 6 23:27:07.145680 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 6 23:27:07.145695 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 6 23:27:07.145711 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 6 23:27:07.145727 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 6 23:27:07.145750 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Jul 6 23:27:07.145774 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 6 23:27:07.145791 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 6 23:27:07.145807 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jul 6 23:27:07.145824 kernel: psci: probing for conduit method from ACPI. Jul 6 23:27:07.145845 kernel: psci: PSCIv1.0 detected in firmware. Jul 6 23:27:07.145862 kernel: psci: Using standard PSCI v0.2 function IDs Jul 6 23:27:07.145878 kernel: psci: Trusted OS migration not required Jul 6 23:27:07.145895 kernel: psci: SMC Calling Convention v1.1 Jul 6 23:27:07.145912 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 6 23:27:07.145928 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 6 23:27:07.145945 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 6 23:27:07.145962 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 6 23:27:07.145979 kernel: Detected PIPT I-cache on CPU0 Jul 6 23:27:07.145996 kernel: CPU features: detected: GIC system register CPU interface Jul 6 23:27:07.146013 kernel: CPU features: detected: Spectre-v2 Jul 6 23:27:07.146034 kernel: CPU features: detected: Spectre-v3a Jul 6 23:27:07.146051 kernel: CPU features: detected: Spectre-BHB Jul 6 23:27:07.146067 kernel: CPU features: detected: ARM erratum 1742098 Jul 6 23:27:07.146084 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 6 23:27:07.146101 kernel: alternatives: applying boot alternatives Jul 6 23:27:07.146120 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 6 23:27:07.146138 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:27:07.148266 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:27:07.148291 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:27:07.148309 kernel: Fallback order for Node 0: 0 Jul 6 23:27:07.148340 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jul 6 23:27:07.148359 kernel: Policy zone: Normal Jul 6 23:27:07.148377 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:27:07.148394 kernel: software IO TLB: area num 2. Jul 6 23:27:07.148412 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Jul 6 23:27:07.148429 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:27:07.148445 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:27:07.148464 kernel: rcu: RCU event tracing is enabled. Jul 6 23:27:07.148482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:27:07.148499 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:27:07.148516 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:27:07.148533 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 6 23:27:07.148556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:27:07.148574 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:27:07.148591 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:27:07.148607 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 6 23:27:07.148624 kernel: GICv3: 96 SPIs implemented Jul 6 23:27:07.148640 kernel: GICv3: 0 Extended SPIs implemented Jul 6 23:27:07.148657 kernel: Root IRQ handler: gic_handle_irq Jul 6 23:27:07.148673 kernel: GICv3: GICv3 features: 16 PPIs Jul 6 23:27:07.148690 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 6 23:27:07.148707 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 6 23:27:07.148725 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 6 23:27:07.148742 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jul 6 23:27:07.148767 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jul 6 23:27:07.148784 kernel: GICv3: using LPI property table @0x0000000400110000 Jul 6 23:27:07.148801 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 6 23:27:07.148818 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jul 6 23:27:07.148834 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:27:07.148851 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 6 23:27:07.148868 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 6 23:27:07.148885 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 6 23:27:07.148902 kernel: Console: colour dummy device 80x25 Jul 6 23:27:07.148919 kernel: printk: legacy console [tty1] enabled Jul 6 23:27:07.148937 kernel: ACPI: Core revision 20240827 Jul 6 23:27:07.148959 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 6 23:27:07.148976 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:27:07.148994 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 6 23:27:07.149011 kernel: landlock: Up and running. Jul 6 23:27:07.149028 kernel: SELinux: Initializing. Jul 6 23:27:07.149045 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:27:07.149063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:27:07.149080 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:27:07.149097 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:27:07.149120 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 6 23:27:07.149137 kernel: Remapping and enabling EFI services. Jul 6 23:27:07.151236 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:27:07.151282 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:27:07.151300 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 6 23:27:07.151318 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jul 6 23:27:07.151336 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 6 23:27:07.151353 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:27:07.151370 kernel: SMP: Total of 2 processors activated. 
Jul 6 23:27:07.151409 kernel: CPU: All CPU(s) started at EL1 Jul 6 23:27:07.151428 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:27:07.151450 kernel: CPU features: detected: 32-bit EL1 Support Jul 6 23:27:07.151469 kernel: CPU features: detected: CRC32 instructions Jul 6 23:27:07.151487 kernel: alternatives: applying system-wide alternatives Jul 6 23:27:07.151505 kernel: Memory: 3796516K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 212600K reserved, 16384K cma-reserved) Jul 6 23:27:07.151523 kernel: devtmpfs: initialized Jul 6 23:27:07.151545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:27:07.151567 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:27:07.151585 kernel: 16912 pages in range for non-PLT usage Jul 6 23:27:07.151603 kernel: 508432 pages in range for PLT usage Jul 6 23:27:07.151621 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:27:07.151638 kernel: SMBIOS 3.0.0 present. Jul 6 23:27:07.151656 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 6 23:27:07.151673 kernel: DMI: Memory slots populated: 0/0 Jul 6 23:27:07.151691 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:27:07.151714 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:27:07.151734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:27:07.151752 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:27:07.151769 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:27:07.151787 kernel: audit: type=2000 audit(0.230:1): state=initialized audit_enabled=0 res=1 Jul 6 23:27:07.151805 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:27:07.151823 kernel: cpuidle: using governor menu Jul 6 23:27:07.151840 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 6 23:27:07.151858 kernel: ASID allocator initialised with 65536 entries Jul 6 23:27:07.151882 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:27:07.151901 kernel: Serial: AMBA PL011 UART driver Jul 6 23:27:07.151919 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:27:07.151937 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:27:07.151955 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:27:07.151973 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:27:07.151990 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:27:07.152009 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:27:07.152027 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 6 23:27:07.152050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:27:07.152069 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:27:07.152086 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:27:07.152103 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:27:07.152122 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:27:07.152139 kernel: ACPI: Interpreter enabled Jul 6 23:27:07.152224 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:27:07.152244 kernel: ACPI: MCFG table detected, 1 entries Jul 6 23:27:07.152262 kernel: ACPI: CPU0 has been hot-added Jul 6 23:27:07.152289 kernel: ACPI: CPU1 has been hot-added Jul 6 23:27:07.152307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 6 23:27:07.152662 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:27:07.152914 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 6 23:27:07.153133 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 6 23:27:07.155498 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 6 23:27:07.155719 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 6 23:27:07.155761 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 6 23:27:07.155781 kernel: acpiphp: Slot [1] registered Jul 6 23:27:07.155799 kernel: acpiphp: Slot [2] registered Jul 6 23:27:07.155817 kernel: acpiphp: Slot [3] registered Jul 6 23:27:07.155836 kernel: acpiphp: Slot [4] registered Jul 6 23:27:07.155854 kernel: acpiphp: Slot [5] registered Jul 6 23:27:07.155871 kernel: acpiphp: Slot [6] registered Jul 6 23:27:07.155890 kernel: acpiphp: Slot [7] registered Jul 6 23:27:07.155908 kernel: acpiphp: Slot [8] registered Jul 6 23:27:07.155926 kernel: acpiphp: Slot [9] registered Jul 6 23:27:07.155949 kernel: acpiphp: Slot [10] registered Jul 6 23:27:07.155966 kernel: acpiphp: Slot [11] registered Jul 6 23:27:07.155984 kernel: acpiphp: Slot [12] registered Jul 6 23:27:07.156001 kernel: acpiphp: Slot [13] registered Jul 6 23:27:07.156019 kernel: acpiphp: Slot [14] registered Jul 6 23:27:07.156036 kernel: acpiphp: Slot [15] registered Jul 6 23:27:07.156054 kernel: acpiphp: Slot [16] registered Jul 6 23:27:07.156072 kernel: acpiphp: Slot [17] registered Jul 6 23:27:07.156090 kernel: acpiphp: Slot [18] registered Jul 6 23:27:07.156112 kernel: acpiphp: Slot [19] registered Jul 6 23:27:07.156131 kernel: acpiphp: Slot [20] registered Jul 6 23:27:07.156198 kernel: acpiphp: Slot [21] registered Jul 6 23:27:07.156221 kernel: acpiphp: Slot [22] registered Jul 6 
23:27:07.156239 kernel: acpiphp: Slot [23] registered Jul 6 23:27:07.156258 kernel: acpiphp: Slot [24] registered Jul 6 23:27:07.156276 kernel: acpiphp: Slot [25] registered Jul 6 23:27:07.156294 kernel: acpiphp: Slot [26] registered Jul 6 23:27:07.156313 kernel: acpiphp: Slot [27] registered Jul 6 23:27:07.156330 kernel: acpiphp: Slot [28] registered Jul 6 23:27:07.156357 kernel: acpiphp: Slot [29] registered Jul 6 23:27:07.156375 kernel: acpiphp: Slot [30] registered Jul 6 23:27:07.156392 kernel: acpiphp: Slot [31] registered Jul 6 23:27:07.156410 kernel: PCI host bridge to bus 0000:00 Jul 6 23:27:07.156665 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 6 23:27:07.156864 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 6 23:27:07.157058 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 6 23:27:07.157306 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 6 23:27:07.157572 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jul 6 23:27:07.157831 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jul 6 23:27:07.158049 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jul 6 23:27:07.159016 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jul 6 23:27:07.159330 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jul 6 23:27:07.159540 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:27:07.159766 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jul 6 23:27:07.159975 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jul 6 23:27:07.162275 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jul 6 23:27:07.162576 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jul 6 23:27:07.162792 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:27:07.162997 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Jul 6 23:27:07.163247 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Jul 6 23:27:07.163518 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Jul 6 23:27:07.163771 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Jul 6 23:27:07.164019 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Jul 6 23:27:07.164365 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 6 23:27:07.164576 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 6 23:27:07.164768 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 6 23:27:07.164810 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 6 23:27:07.164830 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 6 23:27:07.164850 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 6 23:27:07.164869 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 6 23:27:07.164887 kernel: iommu: Default domain type: Translated Jul 6 23:27:07.164905 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:27:07.164924 kernel: efivars: Registered efivars operations Jul 6 23:27:07.164942 kernel: vgaarb: loaded Jul 6 23:27:07.164960 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:27:07.164978 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:27:07.165004 kernel: VFS: 
Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:27:07.165023 kernel: pnp: PnP ACPI init Jul 6 23:27:07.166104 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 6 23:27:07.168327 kernel: pnp: PnP ACPI: found 1 devices Jul 6 23:27:07.168360 kernel: NET: Registered PF_INET protocol family Jul 6 23:27:07.168379 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:27:07.168399 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:27:07.168418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:27:07.168448 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:27:07.168467 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:27:07.168485 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:27:07.168503 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:27:07.168522 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:27:07.168540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:27:07.168558 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:27:07.168578 kernel: kvm [1]: HYP mode not available Jul 6 23:27:07.168596 kernel: Initialise system trusted keyrings Jul 6 23:27:07.168620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:27:07.168638 kernel: Key type asymmetric registered Jul 6 23:27:07.168656 kernel: Asymmetric key parser 'x509' registered Jul 6 23:27:07.168674 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 6 23:27:07.168692 kernel: io scheduler mq-deadline registered Jul 6 23:27:07.168712 kernel: io scheduler kyber registered Jul 6 23:27:07.168731 kernel: io scheduler bfq registered Jul 6 23:27:07.169051 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 6 23:27:07.169102 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 6 23:27:07.169123 kernel: ACPI: button: Power Button [PWRB] Jul 6 23:27:07.169191 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 6 23:27:07.169219 kernel: ACPI: button: Sleep Button [SLPB] Jul 6 23:27:07.169239 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:27:07.169260 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 6 23:27:07.169516 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 6 23:27:07.169549 kernel: printk: legacy console [ttyS0] disabled Jul 6 23:27:07.169568 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 6 23:27:07.169597 kernel: printk: legacy console [ttyS0] enabled Jul 6 23:27:07.169616 kernel: printk: legacy bootconsole [uart0] disabled Jul 6 23:27:07.169634 kernel: thunder_xcv, ver 1.0 Jul 6 23:27:07.169653 kernel: thunder_bgx, ver 1.0 Jul 6 23:27:07.169671 kernel: nicpf, ver 1.0 Jul 6 23:27:07.169689 kernel: nicvf, ver 1.0 Jul 6 23:27:07.169941 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:27:07.173255 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:27:06 UTC (1751844426) Jul 6 23:27:07.173318 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:27:07.173339 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jul 6 23:27:07.173361 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:27:07.173380 kernel: 
watchdog: NMI not fully supported Jul 6 23:27:07.173398 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:27:07.173416 kernel: Segment Routing with IPv6 Jul 6 23:27:07.173434 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:27:07.173453 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:27:07.173471 kernel: Key type dns_resolver registered Jul 6 23:27:07.173495 kernel: registered taskstats version 1 Jul 6 23:27:07.173513 kernel: Loading compiled-in X.509 certificates Jul 6 23:27:07.173531 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718' Jul 6 23:27:07.173549 kernel: Demotion targets for Node 0: null Jul 6 23:27:07.173567 kernel: Key type .fscrypt registered Jul 6 23:27:07.173585 kernel: Key type fscrypt-provisioning registered Jul 6 23:27:07.173602 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:27:07.173620 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:27:07.173638 kernel: ima: No architecture policies found Jul 6 23:27:07.173661 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:27:07.173679 kernel: clk: Disabling unused clocks Jul 6 23:27:07.173697 kernel: PM: genpd: Disabling unused power domains Jul 6 23:27:07.173715 kernel: Warning: unable to open an initial console. Jul 6 23:27:07.173735 kernel: Freeing unused kernel memory: 39488K Jul 6 23:27:07.173753 kernel: Run /init as init process Jul 6 23:27:07.173771 kernel: with arguments: Jul 6 23:27:07.173789 kernel: /init Jul 6 23:27:07.173806 kernel: with environment: Jul 6 23:27:07.173823 kernel: HOME=/ Jul 6 23:27:07.173846 kernel: TERM=linux Jul 6 23:27:07.173864 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:27:07.173884 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:27:07.173909 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:27:07.173930 systemd[1]: Detected virtualization amazon. Jul 6 23:27:07.173949 systemd[1]: Detected architecture arm64. Jul 6 23:27:07.173967 systemd[1]: Running in initrd. Jul 6 23:27:07.173992 systemd[1]: No hostname configured, using default hostname. Jul 6 23:27:07.174013 systemd[1]: Hostname set to . Jul 6 23:27:07.174032 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:27:07.174052 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:27:07.174071 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:27:07.174091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:27:07.174113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:27:07.174133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:27:07.174201 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:27:07.174226 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jul 6 23:27:07.174249 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:27:07.174269 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:27:07.174289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:27:07.174309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:27:07.174330 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:27:07.174388 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:27:07.174409 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:27:07.174429 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:27:07.174450 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:27:07.174471 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:27:07.174493 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:27:07.174513 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:27:07.174533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:27:07.174560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:27:07.174581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:27:07.174600 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:27:07.174620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:27:07.174640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:27:07.174659 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:27:07.174680 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 6 23:27:07.174699 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:27:07.174720 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:27:07.174747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:27:07.174768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:07.174788 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:27:07.174811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:27:07.174904 systemd-journald[257]: Collecting audit messages is disabled. Jul 6 23:27:07.174961 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:27:07.174982 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:27:07.175004 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:27:07.175034 kernel: Bridge firewalling registered Jul 6 23:27:07.175079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:27:07.175102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:27:07.175127 systemd-journald[257]: Journal started Jul 6 23:27:07.176333 systemd-journald[257]: Runtime Journal (/run/log/journal/ec2863995cb8fb15b6d7e04722da7cac) is 8M, max 75.3M, 67.3M free. 
Jul 6 23:27:07.111751 systemd-modules-load[259]: Inserted module 'overlay' Jul 6 23:27:07.152210 systemd-modules-load[259]: Inserted module 'br_netfilter' Jul 6 23:27:07.187184 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:27:07.197387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:07.213505 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:27:07.222878 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:27:07.234370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:27:07.247065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:27:07.254197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:27:07.282260 systemd-tmpfiles[279]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 6 23:27:07.288543 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:27:07.301591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:27:07.306249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:27:07.334697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:27:07.343453 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:27:07.387116 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 6 23:27:07.430001 systemd-resolved[295]: Positive Trust Anchors: Jul 6 23:27:07.430041 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:27:07.430108 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:27:07.571186 kernel: SCSI subsystem initialized Jul 6 23:27:07.579186 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:27:07.592197 kernel: iscsi: registered transport (tcp) Jul 6 23:27:07.615196 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:27:07.615275 kernel: QLogic iSCSI HBA Driver Jul 6 23:27:07.649516 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:27:07.678967 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 6 23:27:07.688457 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:27:07.707428 kernel: random: crng init done Jul 6 23:27:07.707948 systemd-resolved[295]: Defaulting to hostname 'linux'. Jul 6 23:27:07.712102 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:27:07.716860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:27:07.782644 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:27:07.788706 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:27:07.891200 kernel: raid6: neonx8 gen() 6536 MB/s Jul 6 23:27:07.909203 kernel: raid6: neonx4 gen() 6534 MB/s Jul 6 23:27:07.926200 kernel: raid6: neonx2 gen() 5419 MB/s Jul 6 23:27:07.943201 kernel: raid6: neonx1 gen() 3950 MB/s Jul 6 23:27:07.960207 kernel: raid6: int64x8 gen() 3647 MB/s Jul 6 23:27:07.977210 kernel: raid6: int64x4 gen() 3697 MB/s Jul 6 23:27:07.995212 kernel: raid6: int64x2 gen() 3569 MB/s Jul 6 23:27:08.013400 kernel: raid6: int64x1 gen() 2740 MB/s Jul 6 23:27:08.013471 kernel: raid6: using algorithm neonx8 gen() 6536 MB/s Jul 6 23:27:08.032377 kernel: raid6: .... xor() 4578 MB/s, rmw enabled Jul 6 23:27:08.032451 kernel: raid6: using neon recovery algorithm Jul 6 23:27:08.041815 kernel: xor: measuring software checksum speed Jul 6 23:27:08.041895 kernel: 8regs : 12960 MB/sec Jul 6 23:27:08.043263 kernel: 32regs : 13020 MB/sec Jul 6 23:27:08.044697 kernel: arm64_neon : 7933 MB/sec Jul 6 23:27:08.044754 kernel: xor: using function: 32regs (13020 MB/sec) Jul 6 23:27:08.141202 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:27:08.154245 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:27:08.160897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:27:08.213353 systemd-udevd[507]: Using default interface naming scheme 'v255'. Jul 6 23:27:08.226205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:27:08.240658 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:27:08.300882 dracut-pre-trigger[515]: rd.md=0: removing MD RAID activation Jul 6 23:27:08.352688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:27:08.359881 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:27:08.519287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:27:08.528511 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:27:08.725123 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 6 23:27:08.725245 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 6 23:27:08.725594 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 6 23:27:08.728194 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 6 23:27:08.728546 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 6 23:27:08.734564 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 6 23:27:08.744196 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 6 23:27:08.745959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:27:08.746242 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 6 23:27:08.757536 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:7b:9f:2c:95:bf Jul 6 23:27:08.753556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:08.766919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:08.775624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:27:08.775680 kernel: GPT:9289727 != 16777215 Jul 6 23:27:08.775706 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:27:08.775730 kernel: GPT:9289727 != 16777215 Jul 6 23:27:08.775753 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:27:08.780087 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:27:08.783448 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 6 23:27:08.792509 (udev-worker)[551]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:27:08.826264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:08.848257 kernel: nvme nvme0: using unchecked data buffer Jul 6 23:27:08.978236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:27:09.036592 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 6 23:27:09.061256 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:27:09.108895 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 6 23:27:09.132695 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 6 23:27:09.137403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 6 23:27:09.145759 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:27:09.148779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:27:09.156678 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:27:09.162797 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:27:09.168580 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:27:09.198046 disk-uuid[686]: Primary Header is updated. Jul 6 23:27:09.198046 disk-uuid[686]: Secondary Entries is updated. Jul 6 23:27:09.198046 disk-uuid[686]: Secondary Header is updated. Jul 6 23:27:09.215536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 6 23:27:09.214109 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:27:10.241199 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 6 23:27:10.244235 disk-uuid[690]: The operation has completed successfully. Jul 6 23:27:10.431203 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:27:10.431439 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:27:10.523028 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:27:10.561304 sh[954]: Success Jul 6 23:27:10.591630 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 6 23:27:10.591712 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:27:10.594306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:27:10.608220 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:27:10.720985 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:27:10.730131 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:27:10.756683 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:27:10.778219 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:27:10.781206 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (977) Jul 6 23:27:10.785950 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d Jul 6 23:27:10.787328 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:27:10.787357 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:27:10.897801 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:27:10.902587 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:27:10.908403 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:27:10.914385 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:27:10.923384 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:27:10.975195 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1010) Jul 6 23:27:10.981192 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:27:10.981266 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:27:10.982887 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 6 23:27:11.003254 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:27:11.005364 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:27:11.017398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:27:11.130123 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:27:11.142365 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:27:11.224518 systemd-networkd[1148]: lo: Link UP Jul 6 23:27:11.225027 systemd-networkd[1148]: lo: Gained carrier Jul 6 23:27:11.228606 systemd-networkd[1148]: Enumeration completed Jul 6 23:27:11.229324 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:27:11.230360 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:11.230369 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:27:11.246353 systemd[1]: Reached target network.target - Network. 
Jul 6 23:27:11.252825 systemd-networkd[1148]: eth0: Link UP Jul 6 23:27:11.252840 systemd-networkd[1148]: eth0: Gained carrier Jul 6 23:27:11.252861 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:11.270264 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.19.251/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:27:11.536226 ignition[1065]: Ignition 2.21.0 Jul 6 23:27:11.536251 ignition[1065]: Stage: fetch-offline Jul 6 23:27:11.537232 ignition[1065]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:11.537258 ignition[1065]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:11.547422 ignition[1065]: Ignition finished successfully Jul 6 23:27:11.553216 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:27:11.561903 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 6 23:27:11.598082 ignition[1159]: Ignition 2.21.0 Jul 6 23:27:11.598116 ignition[1159]: Stage: fetch Jul 6 23:27:11.598767 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:11.598794 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:11.599095 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:11.614473 ignition[1159]: PUT result: OK Jul 6 23:27:11.618692 ignition[1159]: parsed url from cmdline: "" Jul 6 23:27:11.618868 ignition[1159]: no config URL provided Jul 6 23:27:11.620598 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:27:11.623071 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:27:11.625362 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:11.628494 ignition[1159]: PUT result: OK Jul 6 23:27:11.628631 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 6 23:27:11.632804 ignition[1159]: GET result: OK Jul 6 23:27:11.633035 ignition[1159]: parsing config with SHA512: 54b9f1e41601db55ec9abc3a4adab8f15f1b1cac0f510f28410d392c01d628c3ad94c1a8bb5b8021b5abc7d1f02ac26f7e4850d4d7e75bcf17524b2cd93d7082 Jul 6 23:27:11.649094 unknown[1159]: fetched base config from "system" Jul 6 23:27:11.649136 unknown[1159]: fetched base config from "system" Jul 6 23:27:11.649835 ignition[1159]: fetch: fetch complete Jul 6 23:27:11.649175 unknown[1159]: fetched user config from "aws" Jul 6 23:27:11.649849 ignition[1159]: fetch: fetch passed Jul 6 23:27:11.649947 ignition[1159]: Ignition finished successfully Jul 6 23:27:11.662881 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:27:11.670678 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:27:11.721704 ignition[1166]: Ignition 2.21.0 Jul 6 23:27:11.721738 ignition[1166]: Stage: kargs Jul 6 23:27:11.722419 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:11.722447 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:11.722615 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:11.728350 ignition[1166]: PUT result: OK Jul 6 23:27:11.741065 ignition[1166]: kargs: kargs passed Jul 6 23:27:11.741265 ignition[1166]: Ignition finished successfully Jul 6 23:27:11.746944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:27:11.752451 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 6 23:27:11.802777 ignition[1172]: Ignition 2.21.0 Jul 6 23:27:11.803414 ignition[1172]: Stage: disks Jul 6 23:27:11.804433 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:11.804818 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:11.805004 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:11.815965 ignition[1172]: PUT result: OK Jul 6 23:27:11.822116 ignition[1172]: disks: disks passed Jul 6 23:27:11.822520 ignition[1172]: Ignition finished successfully Jul 6 23:27:11.828413 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:27:11.833360 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:27:11.836638 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:27:11.842778 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:27:11.846037 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:27:11.853761 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:27:11.858492 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:27:11.912378 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 6 23:27:11.916729 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:27:11.926562 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:27:12.078192 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none. Jul 6 23:27:12.079985 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:27:12.084370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:27:12.094620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:27:12.101326 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:27:12.102026 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:27:12.102112 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:27:12.105893 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:27:12.132752 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:27:12.138060 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:27:12.162265 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1199) Jul 6 23:27:12.167039 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:27:12.167101 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:27:12.167126 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 6 23:27:12.175762 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:27:12.567591 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:27:12.597680 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:27:12.606647 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:27:12.616238 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:27:12.780384 systemd-networkd[1148]: eth0: Gained IPv6LL Jul 6 23:27:12.929844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:27:12.937996 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:27:12.948089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:27:12.967999 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:27:12.971263 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:27:13.010228 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:27:13.022660 ignition[1312]: INFO : Ignition 2.21.0 Jul 6 23:27:13.022660 ignition[1312]: INFO : Stage: mount Jul 6 23:27:13.026375 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:13.026375 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:13.031544 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:13.034732 ignition[1312]: INFO : PUT result: OK Jul 6 23:27:13.039857 ignition[1312]: INFO : mount: mount passed Jul 6 23:27:13.041992 ignition[1312]: INFO : Ignition finished successfully Jul 6 23:27:13.045704 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:27:13.052142 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:27:13.083727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:27:13.130191 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1324) Jul 6 23:27:13.135538 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:27:13.135609 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:27:13.136895 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 6 23:27:13.145729 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:27:13.201231 ignition[1341]: INFO : Ignition 2.21.0 Jul 6 23:27:13.201231 ignition[1341]: INFO : Stage: files Jul 6 23:27:13.207754 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:13.207754 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:13.207754 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:13.215681 ignition[1341]: INFO : PUT result: OK Jul 6 23:27:13.219501 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:27:13.229017 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:27:13.229017 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:27:13.277051 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:27:13.280272 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:27:13.283628 unknown[1341]: wrote ssh authorized keys file for user: core Jul 6 23:27:13.286725 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:27:13.291210 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:27:13.295912 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 6 23:27:13.379016 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:27:13.535165 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:27:13.535165 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:27:13.544624 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:27:13.571933 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:27:13.571933 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:27:13.571933 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:27:13.571933 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:27:13.571933 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:27:13.571933 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 6 23:27:14.276221 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:27:14.678621 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:27:14.678621 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:27:14.690517 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:27:14.698339 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:27:14.698339 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:27:14.698339 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:27:14.711203 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:27:14.711203 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:27:14.711203 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:27:14.711203 ignition[1341]: INFO : files: files passed Jul 6 23:27:14.711203 ignition[1341]: INFO : Ignition finished successfully Jul 6 23:27:14.714972 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:27:14.732384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:27:14.737499 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:27:14.767205 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:27:14.770252 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:27:14.789947 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:27:14.789947 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:27:14.798912 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:27:14.805840 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:27:14.812766 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:27:14.819248 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:27:14.924124 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 6 23:27:14.925627 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:27:14.931892 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:27:14.934529 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:27:14.939773 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:27:14.943271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:27:15.005096 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:27:15.013707 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:27:15.052561 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:27:15.053005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:27:15.061525 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:27:15.064756 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:27:15.065030 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:27:15.077626 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:27:15.082249 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:27:15.086855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:27:15.092342 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:27:15.095409 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:27:15.100131 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:27:15.108702 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:27:15.111498 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:27:15.120010 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:27:15.123965 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:27:15.129201 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:27:15.135335 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:27:15.135827 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:27:15.143231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:27:15.146894 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:27:15.154735 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:27:15.157001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:27:15.160690 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:27:15.161341 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:27:15.174314 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:27:15.174876 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:27:15.184008 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:27:15.184849 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:27:15.191053 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:27:15.202504 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jul 6 23:27:15.204717 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:27:15.205012 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:27:15.210923 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:27:15.211180 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:27:15.245003 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:27:15.248601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:27:15.251319 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:27:15.262725 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:27:15.269242 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:27:15.283132 ignition[1395]: INFO : Ignition 2.21.0 Jul 6 23:27:15.283132 ignition[1395]: INFO : Stage: umount Jul 6 23:27:15.287432 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:27:15.287432 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 6 23:27:15.287432 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 6 23:27:15.287432 ignition[1395]: INFO : PUT result: OK Jul 6 23:27:15.310415 ignition[1395]: INFO : umount: umount passed Jul 6 23:27:15.314705 ignition[1395]: INFO : Ignition finished successfully Jul 6 23:27:15.319896 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:27:15.323309 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:27:15.328599 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:27:15.328727 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:27:15.331687 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:27:15.331801 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:27:15.334672 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:27:15.334777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:27:15.339261 systemd[1]: Stopped target network.target - Network. Jul 6 23:27:15.343351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:27:15.343655 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:27:15.352956 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:27:15.357905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:27:15.361390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:27:15.364485 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:27:15.367027 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:27:15.379128 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:27:15.379266 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:27:15.387096 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:27:15.387667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:27:15.392464 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:27:15.392583 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:27:15.395719 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:27:15.395814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 6 23:27:15.403135 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:27:15.403377 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:27:15.408031 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:27:15.411062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:27:15.432581 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:27:15.432825 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:27:15.455141 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:27:15.455593 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:27:15.455783 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:27:15.469266 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:27:15.470445 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:27:15.476007 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:27:15.476092 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:27:15.486381 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:27:15.494470 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:27:15.499784 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:27:15.518580 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:27:15.518735 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:27:15.528201 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:27:15.528484 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:27:15.536649 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:27:15.537483 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:27:15.546022 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:27:15.554703 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:27:15.554868 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:27:15.582716 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:27:15.585050 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:27:15.592508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:27:15.592621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:27:15.601058 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:27:15.601203 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:27:15.603763 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:27:15.603887 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:27:15.612601 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:27:15.612747 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:27:15.614442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:27:15.614557 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:27:15.629200 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:27:15.638080 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:27:15.638532 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:27:15.650889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:27:15.651262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:27:15.661487 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:27:15.661611 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:27:15.667653 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:27:15.667764 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:27:15.681803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:27:15.681917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:15.695405 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:27:15.695731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 6 23:27:15.695821 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:27:15.695913 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:27:15.697133 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:27:15.697488 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:27:15.705555 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:27:15.705893 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:27:15.716729 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:27:15.725788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:27:15.771422 systemd[1]: Switching root. Jul 6 23:27:15.811275 systemd-journald[257]: Journal stopped Jul 6 23:27:18.411558 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). Jul 6 23:27:18.411693 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:27:18.411753 kernel: SELinux: policy capability open_perms=1 Jul 6 23:27:18.411785 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:27:18.411815 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:27:18.411846 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:27:18.411875 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:27:18.411906 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:27:18.411935 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:27:18.411962 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:27:18.411990 kernel: audit: type=1403 audit(1751844436.277:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:27:18.412031 systemd[1]: Successfully loaded SELinux policy in 98.111ms. Jul 6 23:27:18.412080 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.263ms. 
Jul 6 23:27:18.412112 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:27:18.413430 systemd[1]: Detected virtualization amazon. Jul 6 23:27:18.413503 systemd[1]: Detected architecture arm64. Jul 6 23:27:18.413539 systemd[1]: Detected first boot. Jul 6 23:27:18.413571 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:27:18.413604 zram_generator::config[1438]: No configuration found. Jul 6 23:27:18.413646 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:27:18.413675 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:27:18.413708 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:27:18.413742 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:27:18.413773 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:27:18.413804 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:27:18.413838 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:27:18.413868 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:27:18.413901 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:27:18.413936 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:27:18.413968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:27:18.413996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:27:18.414028 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:27:18.414059 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:27:18.414091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:27:18.414121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:27:18.414196 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:27:18.414240 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:27:18.414274 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:27:18.414332 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:27:18.414364 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:27:18.414397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:27:18.414428 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:27:18.414460 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:27:18.414491 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:27:18.414528 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:27:18.414556 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jul 6 23:27:18.414585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:27:18.414615 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:27:18.414646 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:27:18.414677 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:27:18.414705 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:27:18.414735 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:27:18.414765 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:27:18.414802 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:27:18.414833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:27:18.414864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:27:18.414893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:27:18.414922 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:27:18.414950 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:27:18.414981 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:27:18.415010 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:27:18.415039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:27:18.415073 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:27:18.415103 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:27:18.415132 systemd[1]: Reached target machines.target - Containers. Jul 6 23:27:18.424287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:27:18.424340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:18.424371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:27:18.424404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:27:18.424433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:27:18.424465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:27:18.424504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:27:18.424534 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:27:18.424567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:27:18.424596 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:27:18.424626 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:27:18.424654 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:27:18.424683 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:27:18.424713 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 6 23:27:18.424752 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:18.424796 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:27:18.424825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:27:18.424854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:27:18.424884 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:27:18.424918 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:27:18.424948 kernel: fuse: init (API version 7.41) Jul 6 23:27:18.424984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:27:18.425023 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:27:18.425054 systemd[1]: Stopped verity-setup.service. Jul 6 23:27:18.425089 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:27:18.425118 kernel: loop: module loaded Jul 6 23:27:18.425200 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:27:18.425244 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:27:18.425275 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:27:18.425305 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:27:18.425333 kernel: ACPI: bus type drm_connector registered Jul 6 23:27:18.425362 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:27:18.425390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:27:18.425429 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:27:18.425459 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:27:18.425488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:27:18.425522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:27:18.425551 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:27:18.425579 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:27:18.425609 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:27:18.425642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:27:18.425670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:27:18.425703 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:27:18.425734 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:27:18.425767 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:27:18.425796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:27:18.425825 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:27:18.425855 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:27:18.425885 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:27:18.425915 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 6 23:27:18.425948 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:27:18.425984 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:27:18.426014 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:27:18.426100 systemd-journald[1521]: Collecting audit messages is disabled. Jul 6 23:27:18.426553 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:27:18.426610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:18.426644 systemd-journald[1521]: Journal started Jul 6 23:27:18.426694 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec2863995cb8fb15b6d7e04722da7cac) is 8M, max 75.3M, 67.3M free. Jul 6 23:27:17.618394 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:27:17.643226 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 6 23:27:18.436475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:27:17.644200 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:27:18.445328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:27:18.457582 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:27:18.457658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:27:18.471354 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:27:18.486202 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:27:18.504591 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:27:18.505590 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:27:18.509761 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:27:18.513446 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:27:18.516728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:27:18.519985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:27:18.563770 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:27:18.592604 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:27:18.596441 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:27:18.604450 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:27:18.614720 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:27:18.666238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:27:18.671575 kernel: loop0: detected capacity change from 0 to 138376 Jul 6 23:27:18.694580 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec2863995cb8fb15b6d7e04722da7cac is 109.204ms for 935 entries. 
Jul 6 23:27:18.694580 systemd-journald[1521]: System Journal (/var/log/journal/ec2863995cb8fb15b6d7e04722da7cac) is 8M, max 195.6M, 187.6M free. Jul 6 23:27:18.814026 systemd-journald[1521]: Received client request to flush runtime journal. Jul 6 23:27:18.814247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:27:18.717857 systemd-tmpfiles[1554]: ACLs are not supported, ignoring. Jul 6 23:27:18.717882 systemd-tmpfiles[1554]: ACLs are not supported, ignoring. Jul 6 23:27:18.744765 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:27:18.748300 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:27:18.759367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:27:18.770909 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:27:18.781539 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:27:18.821929 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:27:18.858232 kernel: loop1: detected capacity change from 0 to 107312 Jul 6 23:27:18.908278 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:27:18.917526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:27:18.979225 kernel: loop2: detected capacity change from 0 to 211168 Jul 6 23:27:18.982938 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Jul 6 23:27:18.982988 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Jul 6 23:27:18.993029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:27:19.137218 kernel: loop3: detected capacity change from 0 to 61240 Jul 6 23:27:19.191893 kernel: loop4: detected capacity change from 0 to 138376 Jul 6 23:27:19.210202 kernel: loop5: detected capacity change from 0 to 107312 Jul 6 23:27:19.226229 kernel: loop6: detected capacity change from 0 to 211168 Jul 6 23:27:19.260181 kernel: loop7: detected capacity change from 0 to 61240 Jul 6 23:27:19.281999 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 6 23:27:19.283807 (sd-merge)[1600]: Merged extensions into '/usr'. Jul 6 23:27:19.294378 systemd[1]: Reload requested from client PID 1553 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:27:19.294418 systemd[1]: Reloading... Jul 6 23:27:19.508466 zram_generator::config[1629]: No configuration found. Jul 6 23:27:19.808007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:20.077834 systemd[1]: Reloading finished in 782 ms. Jul 6 23:27:20.083462 ldconfig[1549]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:27:20.099321 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:27:20.105265 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:27:20.110236 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:27:20.134892 systemd[1]: Starting ensure-sysext.service... Jul 6 23:27:20.140502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 6 23:27:20.150308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:27:20.177950 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:27:20.177976 systemd[1]: Reloading... Jul 6 23:27:20.251852 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:27:20.251962 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:27:20.252662 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:27:20.253331 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:27:20.257454 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:27:20.259241 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Jul 6 23:27:20.259479 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Jul 6 23:27:20.263200 systemd-udevd[1681]: Using default interface naming scheme 'v255'. Jul 6 23:27:20.274523 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:27:20.274556 systemd-tmpfiles[1680]: Skipping /boot Jul 6 23:27:20.329500 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:27:20.329538 systemd-tmpfiles[1680]: Skipping /boot Jul 6 23:27:20.393216 zram_generator::config[1715]: No configuration found. Jul 6 23:27:20.713123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:20.864223 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:27:20.988437 systemd[1]: Reloading finished in 809 ms. Jul 6 23:27:21.020459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:27:21.051289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:27:21.077939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:27:21.089367 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:27:21.100529 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:27:21.107750 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:27:21.117017 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:27:21.158450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:27:21.165221 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:27:21.180458 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:27:21.188340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:21.192339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:27:21.197892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:27:21.206803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 6 23:27:21.209472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:21.210069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:21.215990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:21.216728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:21.216979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:21.225831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:27:21.229601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:27:21.232565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:27:21.232872 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:27:21.233278 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:27:21.246997 systemd[1]: Finished ensure-sysext.service. Jul 6 23:27:21.311314 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:27:21.320088 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:27:21.372979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:27:21.376178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:27:21.392517 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:27:21.392973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:27:21.396945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:27:21.407338 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:27:21.413884 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:27:21.415328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:27:21.420381 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:27:21.434502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:27:21.435867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:27:21.440012 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:27:21.443437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 6 23:27:21.489191 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:27:21.550184 augenrules[1901]: No rules Jul 6 23:27:21.554744 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:27:21.559264 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:27:21.624744 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:27:21.896601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:22.000445 systemd-networkd[1805]: lo: Link UP Jul 6 23:27:22.000467 systemd-networkd[1805]: lo: Gained carrier Jul 6 23:27:22.004599 systemd-networkd[1805]: Enumeration completed Jul 6 23:27:22.005781 systemd-networkd[1805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:22.005808 systemd-networkd[1805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:27:22.012606 systemd-networkd[1805]: eth0: Link UP Jul 6 23:27:22.012972 systemd-networkd[1805]: eth0: Gained carrier Jul 6 23:27:22.013010 systemd-networkd[1805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:27:22.016012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 6 23:27:22.020448 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:27:22.027075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:27:22.032281 systemd-networkd[1805]: eth0: DHCPv4 address 172.31.19.251/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 6 23:27:22.035684 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:27:22.043714 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:27:22.107300 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:27:22.153341 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:27:22.216304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:22.238604 systemd-resolved[1806]: Positive Trust Anchors: Jul 6 23:27:22.238652 systemd-resolved[1806]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:27:22.238719 systemd-resolved[1806]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:27:22.255951 systemd-resolved[1806]: Defaulting to hostname 'linux'. Jul 6 23:27:22.260329 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:27:22.263081 systemd[1]: Reached target network.target - Network. Jul 6 23:27:22.265343 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 6 23:27:22.268341 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:27:22.271273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:27:22.274234 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:27:22.277768 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:27:22.280582 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:27:22.283630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:27:22.286724 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:27:22.286796 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:27:22.288972 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:27:22.293115 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:27:22.299417 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:27:22.307817 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:27:22.311462 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:27:22.314794 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:27:22.327550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:27:22.330723 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:27:22.334788 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:27:22.337912 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:27:22.340349 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:27:22.342705 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:22.342924 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:22.345315 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:27:22.352461 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:27:22.362736 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:27:22.372060 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:27:22.381512 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:27:22.392485 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:27:22.394930 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:27:22.399591 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:27:22.409226 systemd[1]: Started ntpd.service - Network Time Service. Jul 6 23:27:22.422478 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:27:22.433519 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 6 23:27:22.440130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 6 23:27:22.451481 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:27:22.471652 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:27:22.480754 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:27:22.485816 jq[1967]: false Jul 6 23:27:22.490367 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:27:22.499094 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:27:22.507692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:27:22.521271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:27:22.524909 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:27:22.525403 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:27:22.584059 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:27:22.588775 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:27:22.622775 jq[1979]: true Jul 6 23:27:22.634660 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:00 UTC 2025 (1): Starting Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:18:00 UTC 2025 (1): Starting Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: ---------------------------------------------------- Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: corporation. Support and training for ntp-4 are Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: available at https://www.nwtime.org/support Jul 6 23:27:22.636800 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: ---------------------------------------------------- Jul 6 23:27:22.634720 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 6 23:27:22.634740 ntpd[1970]: ---------------------------------------------------- Jul 6 23:27:22.634757 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Jul 6 23:27:22.634774 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 6 23:27:22.634792 ntpd[1970]: corporation. Support and training for ntp-4 are Jul 6 23:27:22.634810 ntpd[1970]: available at https://www.nwtime.org/support Jul 6 23:27:22.634826 ntpd[1970]: ---------------------------------------------------- Jul 6 23:27:22.638186 dbus-daemon[1965]: [system] SELinux support is enabled Jul 6 23:27:22.638557 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:27:22.655509 ntpd[1970]: proto: precision = 0.096 usec (-23) Jul 6 23:27:22.657278 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: proto: precision = 0.096 usec (-23) Jul 6 23:27:22.663761 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 6 23:27:22.668522 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: basedate set to 2025-06-24 Jul 6 23:27:22.668522 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: gps base set to 2025-06-29 (week 2373) Jul 6 23:27:22.659425 dbus-daemon[1965]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1805 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 6 23:27:22.663831 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:27:22.665695 ntpd[1970]: basedate set to 2025-06-24 Jul 6 23:27:22.666922 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:27:22.665733 ntpd[1970]: gps base set to 2025-06-29 (week 2373) Jul 6 23:27:22.666966 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:27:22.677734 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:27:22.683301 tar[1982]: linux-arm64/LICENSE Jul 6 23:27:22.683301 tar[1982]: linux-arm64/helm Jul 6 23:27:22.686860 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Listen normally on 3 eth0 172.31.19.251:123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Listen normally on 4 lo [::1]:123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: bind(21) AF_INET6 fe80::47b:9fff:fe2c:95bf%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: unable to create socket on eth0 (5) for fe80::47b:9fff:fe2c:95bf%2#123 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: failed to init interface for address fe80::47b:9fff:fe2c:95bf%2 Jul 6 23:27:22.692188 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: Listening on routing socket on fd #21 for interface updates Jul 6 23:27:22.690348 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 6 23:27:22.690689 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Jul 6 23:27:22.690762 ntpd[1970]: Listen normally on 3 eth0 172.31.19.251:123 Jul 6 23:27:22.690835 ntpd[1970]: Listen normally on 4 lo [::1]:123 Jul 6 23:27:22.690921 ntpd[1970]: bind(21) AF_INET6 fe80::47b:9fff:fe2c:95bf%2#123 flags 0x11 failed: Cannot assign requested address Jul 6 23:27:22.690961 ntpd[1970]: unable to create socket on eth0 (5) for fe80::47b:9fff:fe2c:95bf%2#123 Jul 6 23:27:22.690988 ntpd[1970]: failed to init interface for address fe80::47b:9fff:fe2c:95bf%2 Jul 6 23:27:22.691051 ntpd[1970]: Listening on routing socket on fd #21 for interface updates Jul 6 23:27:22.699297 extend-filesystems[1968]: Found /dev/nvme0n1p6 Jul 6 23:27:22.712708 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 6 23:27:22.737188 extend-filesystems[1968]: Found /dev/nvme0n1p9 Jul 6 23:27:22.748745 jq[2000]: true Jul 6 23:27:22.744009 (ntainerd)[2001]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:27:22.769716 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:22.774312 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:27:22.786456 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:22.786456 ntpd[1970]: 6 Jul 23:27:22 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:22.769785 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 6 23:27:22.786646 extend-filesystems[1968]: Checking size of /dev/nvme0n1p9 Jul 6 23:27:22.801467 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:27:22.801973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:27:22.844240 update_engine[1978]: I20250706 23:27:22.840304 1978 main.cc:92] Flatcar Update Engine starting Jul 6 23:27:22.871426 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:27:22.884000 update_engine[1978]: I20250706 23:27:22.880664 1978 update_check_scheduler.cc:74] Next update check in 3m40s Jul 6 23:27:22.876446 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:27:22.892142 extend-filesystems[1968]: Resized partition /dev/nvme0n1p9 Jul 6 23:27:22.914213 extend-filesystems[2026]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:27:22.926251 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 6 23:27:22.990250 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 6 23:27:23.052256 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 6 23:27:23.052351 coreos-metadata[1964]: Jul 06 23:27:23.052 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.056 INFO Fetch successful Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.056 INFO Fetch successful Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.060 INFO Fetch successful Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.060 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.061 INFO Fetch successful Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.063 INFO Fetch failed with 404: resource not found Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.069 INFO Fetch successful Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.069 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.071 INFO Fetch successful Jul 6 23:27:23.073188 coreos-metadata[1964]: Jul 06 23:27:23.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 6 23:27:23.076586 coreos-metadata[1964]: Jul 06 23:27:23.075 INFO Fetch successful Jul 6 23:27:23.076586 coreos-metadata[1964]: Jul 06 23:27:23.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 6 23:27:23.076790 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:27:23.076790 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:27:23.076790 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 6 23:27:23.099221 extend-filesystems[1968]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:27:23.083749 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:27:23.102788 coreos-metadata[1964]: Jul 06 23:27:23.081 INFO Fetch successful Jul 6 23:27:23.102788 coreos-metadata[1964]: Jul 06 23:27:23.081 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 6 23:27:23.102788 coreos-metadata[1964]: Jul 06 23:27:23.084 INFO Fetch successful Jul 6 23:27:23.085336 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:27:23.089372 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:27:23.089407 systemd-logind[1977]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 6 23:27:23.107989 systemd-logind[1977]: New seat seat0. Jul 6 23:27:23.118761 bash[2047]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:23.120894 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:27:23.125314 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:27:23.138563 systemd[1]: Starting sshkeys.service... Jul 6 23:27:23.287493 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:27:23.297047 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:27:23.354284 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:27:23.359817 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:27:23.404372 systemd-networkd[1805]: eth0: Gained IPv6LL Jul 6 23:27:23.419285 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:27:23.423944 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:27:23.436537 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 6 23:27:23.448071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:23.460487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 6 23:27:23.554981 coreos-metadata[2067]: Jul 06 23:27:23.554 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 6 23:27:23.557124 coreos-metadata[2067]: Jul 06 23:27:23.556 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 6 23:27:23.562479 coreos-metadata[2067]: Jul 06 23:27:23.562 INFO Fetch successful Jul 6 23:27:23.563950 coreos-metadata[2067]: Jul 06 23:27:23.563 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:27:23.574085 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:27:23.576761 coreos-metadata[2067]: Jul 06 23:27:23.576 INFO Fetch successful Jul 6 23:27:23.592044 unknown[2067]: wrote ssh authorized keys file for user: core Jul 6 23:27:23.599600 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:27:23.606585 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2009 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:27:23.629589 systemd[1]: Starting polkit.service - Authorization Manager... Jul 6 23:27:23.734664 update-ssh-keys[2111]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:23.754106 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:27:23.772260 systemd[1]: Finished sshkeys.service. Jul 6 23:27:23.825295 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:27:23.837545 locksmithd[2025]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: Initializing new seelog logger Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: New Seelog Logger Creation Complete Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 processing appconfig overrides Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 processing appconfig overrides Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.939193 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.940019 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 processing appconfig overrides Jul 6 23:27:23.943393 amazon-ssm-agent[2090]: 2025-07-06 23:27:23.9386 INFO Proxy environment variables: Jul 6 23:27:23.949781 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:23.949781 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
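The coreos-metadata fetches above follow the EC2 IMDSv2 pattern: a PUT to /latest/api/token first, then GETs against the versioned meta-data paths with the token attached. A minimal Python sketch of that same flow (not the agent's own implementation), assuming it runs on the instance and IMDSv2 is available:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    """PUT /latest/api/token -- the 'Putting ...' step in the log above."""
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    """GET one 2021-01-03 meta-data path with the session token attached."""
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # Two of the paths fetched in the log above.
    for path in ("instance-id", "public-keys/0/openssh-key"):
        print(path, "->", imds_get(path, token))
```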
Jul 6 23:27:23.949781 amazon-ssm-agent[2090]: 2025/07/06 23:27:23 processing appconfig overrides Jul 6 23:27:23.957525 containerd[2001]: time="2025-07-06T23:27:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:27:23.965667 containerd[2001]: time="2025-07-06T23:27:23.963829982Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:27:24.064322 containerd[2001]: time="2025-07-06T23:27:24.062990531Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.024µs" Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.064880375Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.064966535Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.065365643Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.065415143Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.065474171Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.065600543Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.065630951Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.066048707Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:27:24.068804 containerd[2001]: time="2025-07-06T23:27:24.066098819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:27:24.077193 containerd[2001]: time="2025-07-06T23:27:24.066132683Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:27:24.077193 containerd[2001]: time="2025-07-06T23:27:24.076257335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:27:24.077193 containerd[2001]: time="2025-07-06T23:27:24.076503995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:27:24.077193 containerd[2001]: time="2025-07-06T23:27:24.076965023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:27:24.077193 containerd[2001]: time="2025-07-06T23:27:24.077044595Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:27:24.077193 containerd[2001]: time="2025-07-06T23:27:24.077072171Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:27:24.081322 containerd[2001]: time="2025-07-06T23:27:24.079742027Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:27:24.081502 amazon-ssm-agent[2090]: 2025-07-06 23:27:23.9386 INFO https_proxy: Jul 6 23:27:24.090552 containerd[2001]: time="2025-07-06T23:27:24.090463235Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:27:24.099194 containerd[2001]: time="2025-07-06T23:27:24.094466831Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118265903Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118389251Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118430747Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118460327Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118490591Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118535303Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118571423Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118601735Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118632083Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118659287Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118684427Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118717367Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.118981487Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:27:24.119188 containerd[2001]: time="2025-07-06T23:27:24.119028767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:27:24.119877 containerd[2001]: time="2025-07-06T23:27:24.119063735Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:27:24.119877 containerd[2001]: time="2025-07-06T23:27:24.119094395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:27:24.119877 containerd[2001]: time="2025-07-06T23:27:24.119122607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:27:24.123199 containerd[2001]: time="2025-07-06T23:27:24.121332179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:27:24.123199 containerd[2001]: time="2025-07-06T23:27:24.122555231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:27:24.123199 containerd[2001]: time="2025-07-06T23:27:24.122622515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:27:24.123199 containerd[2001]: time="2025-07-06T23:27:24.122671355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:27:24.123199 containerd[2001]: time="2025-07-06T23:27:24.122707175Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:27:24.123199 containerd[2001]: time="2025-07-06T23:27:24.122737967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:27:24.126396 containerd[2001]: time="2025-07-06T23:27:24.123136283Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:27:24.126724 containerd[2001]: time="2025-07-06T23:27:24.126669479Z" level=info msg="Start snapshots syncer" Jul 6 23:27:24.130206 containerd[2001]: time="2025-07-06T23:27:24.128486363Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:27:24.130206 containerd[2001]: time="2025-07-06T23:27:24.128968403Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129074507Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129285383Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129599255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129667595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129697151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129742691Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129780899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129811391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129840503Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129905951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: 
time="2025-07-06T23:27:24.129943463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.129974087Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.130053503Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:27:24.130557 containerd[2001]: time="2025-07-06T23:27:24.130089383Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:27:24.131103 containerd[2001]: time="2025-07-06T23:27:24.130112771Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.130138103Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138471155Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138515963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138556703Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138728543Z" level=info msg="runtime interface created" Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138748571Z" level=info msg="created NRI interface" Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138770687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138810947Z" level=info msg="Connect containerd service" Jul 6 23:27:24.141203 containerd[2001]: time="2025-07-06T23:27:24.138890987Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:27:24.157407 containerd[2001]: time="2025-07-06T23:27:24.154384787Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:27:24.185754 amazon-ssm-agent[2090]: 2025-07-06 23:27:23.9386 INFO http_proxy: Jul 6 23:27:24.285896 amazon-ssm-agent[2090]: 2025-07-06 23:27:23.9386 INFO no_proxy: Jul 6 23:27:24.386363 amazon-ssm-agent[2090]: 2025-07-06 23:27:23.9389 INFO Checking if agent identity type OnPrem can be assumed Jul 6 23:27:24.485007 amazon-ssm-agent[2090]: 2025-07-06 23:27:23.9390 INFO Checking if agent identity type EC2 can be assumed Jul 6 23:27:24.489729 polkitd[2113]: Started polkitd version 126 Jul 6 23:27:24.513100 polkitd[2113]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:27:24.513915 polkitd[2113]: Loading rules from directory /run/polkit-1/rules.d Jul 6 23:27:24.515332 polkitd[2113]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 
6 23:27:24.516001 polkitd[2113]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 6 23:27:24.516054 polkitd[2113]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 6 23:27:24.517483 polkitd[2113]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:27:24.522108 polkitd[2113]: Finished loading, compiling and executing 2 rules Jul 6 23:27:24.531887 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:27:24.535855 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:27:24.545765 polkitd[2113]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:27:24.584267 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3003 INFO Agent will take identity from EC2 Jul 6 23:27:24.591709 containerd[2001]: time="2025-07-06T23:27:24.591380041Z" level=info msg="Start subscribing containerd event" Jul 6 23:27:24.591709 containerd[2001]: time="2025-07-06T23:27:24.591515689Z" level=info msg="Start recovering state" Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592370893Z" level=info msg="Start event monitor" Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592420645Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592460245Z" level=info msg="Start streaming server" Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592480813Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592496641Z" level=info msg="runtime interface starting up..." Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592510645Z" level=info msg="starting plugins..." Jul 6 23:27:24.592557 containerd[2001]: time="2025-07-06T23:27:24.592555993Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:27:24.594561 containerd[2001]: time="2025-07-06T23:27:24.593942605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:27:24.600175 containerd[2001]: time="2025-07-06T23:27:24.594792325Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:27:24.600175 containerd[2001]: time="2025-07-06T23:27:24.594978349Z" level=info msg="containerd successfully booted in 0.639245s" Jul 6 23:27:24.595068 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:27:24.607621 systemd-hostnamed[2009]: Hostname set to (transient) Jul 6 23:27:24.610437 systemd-resolved[1806]: System hostname changed to 'ip-172-31-19-251'. Jul 6 23:27:24.684596 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3044 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 6 23:27:24.693170 amazon-ssm-agent[2090]: 2025/07/06 23:27:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:24.693170 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 6 23:27:24.693170 amazon-ssm-agent[2090]: 2025/07/06 23:27:24 processing appconfig overrides Jul 6 23:27:24.736036 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3044 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3044 INFO [amazon-ssm-agent] Starting Core Agent Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3044 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3044 INFO [Registrar] Starting registrar module Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3078 INFO [EC2Identity] Checking disk for registration info Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3078 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.3078 INFO [EC2Identity] Generating registration keypair Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.6453 INFO [EC2Identity] Checking write access before registering Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.6461 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.6920 INFO [EC2Identity] EC2 registration was successful. Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.6921 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.6922 INFO [CredentialRefresher] credentialRefresher has started Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.6922 INFO [CredentialRefresher] Starting credentials refresher loop Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.7356 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 6 23:27:24.739610 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.7359 INFO [CredentialRefresher] Credentials ready Jul 6 23:27:24.783236 amazon-ssm-agent[2090]: 2025-07-06 23:27:24.7394 INFO [CredentialRefresher] Next credential rotation will be in 29.9999371009 minutes Jul 6 23:27:25.208005 sshd_keygen[1988]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:27:25.256235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:27:25.262803 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:27:25.267743 systemd[1]: Started sshd@0-172.31.19.251:22-139.178.89.65:58994.service - OpenSSH per-connection server daemon (139.178.89.65:58994). Jul 6 23:27:25.290960 tar[1982]: linux-arm64/README.md Jul 6 23:27:25.309714 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:27:25.310392 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:27:25.321775 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:27:25.346164 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:27:25.359390 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:27:25.368721 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:27:25.377915 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:27:25.380689 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:27:25.636846 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 58994 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:25.641295 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:25.642643 ntpd[1970]: 6 Jul 23:27:25 ntpd[1970]: Listen normally on 6 eth0 [fe80::47b:9fff:fe2c:95bf%2]:123 Jul 6 23:27:25.642031 ntpd[1970]: Listen normally on 6 eth0 [fe80::47b:9fff:fe2c:95bf%2]:123 Jul 6 23:27:25.659072 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 6 23:27:25.664556 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:27:25.691526 systemd-logind[1977]: New session 1 of user core. Jul 6 23:27:25.712698 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:27:25.722572 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:27:25.756503 (systemd)[2233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:27:25.766814 systemd-logind[1977]: New session c1 of user core. Jul 6 23:27:25.777322 amazon-ssm-agent[2090]: 2025-07-06 23:27:25.7739 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 6 23:27:25.877647 amazon-ssm-agent[2090]: 2025-07-06 23:27:25.7784 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2237) started Jul 6 23:27:26.038905 amazon-ssm-agent[2090]: 2025-07-06 23:27:25.7784 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 6 23:27:26.158437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:26.162126 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:27:26.180052 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:26.220760 systemd[2233]: Queued start job for default target default.target. Jul 6 23:27:26.229382 systemd[2233]: Created slice app.slice - User Application Slice. Jul 6 23:27:26.229452 systemd[2233]: Reached target paths.target - Paths. Jul 6 23:27:26.229553 systemd[2233]: Reached target timers.target - Timers. Jul 6 23:27:26.232325 systemd[2233]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:27:26.264446 systemd[2233]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:27:26.264710 systemd[2233]: Reached target sockets.target - Sockets. Jul 6 23:27:26.264821 systemd[2233]: Reached target basic.target - Basic System. Jul 6 23:27:26.264908 systemd[2233]: Reached target default.target - Main User Target. Jul 6 23:27:26.264971 systemd[2233]: Startup finished in 475ms. Jul 6 23:27:26.266126 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:27:26.275094 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:27:26.281761 systemd[1]: Startup finished in 3.795s (kernel) + 9.554s (initrd) + 10.103s (userspace) = 23.452s. Jul 6 23:27:26.453643 systemd[1]: Started sshd@1-172.31.19.251:22-139.178.89.65:59008.service - OpenSSH per-connection server daemon (139.178.89.65:59008). Jul 6 23:27:26.653502 sshd[2272]: Accepted publickey for core from 139.178.89.65 port 59008 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:26.656804 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:26.668536 systemd-logind[1977]: New session 2 of user core. Jul 6 23:27:26.679474 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:27:26.807902 sshd[2274]: Connection closed by 139.178.89.65 port 59008 Jul 6 23:27:26.808222 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:26.817486 systemd[1]: sshd@1-172.31.19.251:22-139.178.89.65:59008.service: Deactivated successfully. 
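The "Startup finished" line above breaks the 23.452s boot into kernel, initrd and userspace phases; the split can be sanity-checked directly from the reported figures:

```python
# Phase timings from the "Startup finished" message above, in seconds.
phases = {"kernel": 3.795, "initrd": 9.554, "userspace": 10.103}

total = round(sum(phases.values()), 3)
print(f"total boot time: {total}s")   # 23.452s, matching the logged sum
```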
Jul 6 23:27:26.821198 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:27:26.823258 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:27:26.827877 systemd-logind[1977]: Removed session 2. Jul 6 23:27:26.847650 systemd[1]: Started sshd@2-172.31.19.251:22-139.178.89.65:59014.service - OpenSSH per-connection server daemon (139.178.89.65:59014). Jul 6 23:27:27.045239 sshd[2282]: Accepted publickey for core from 139.178.89.65 port 59014 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:27.048474 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:27.058775 systemd-logind[1977]: New session 3 of user core. Jul 6 23:27:27.067466 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:27:27.170143 kubelet[2253]: E0706 23:27:27.170044 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:27.174748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:27.175061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:27.176323 systemd[1]: kubelet.service: Consumed 1.524s CPU time, 259.5M memory peak. Jul 6 23:27:27.190215 sshd[2284]: Connection closed by 139.178.89.65 port 59014 Jul 6 23:27:27.190953 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:27.197823 systemd[1]: sshd@2-172.31.19.251:22-139.178.89.65:59014.service: Deactivated successfully. Jul 6 23:27:27.201462 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:27:27.203534 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:27:27.206825 systemd-logind[1977]: Removed session 3. Jul 6 23:27:27.228634 systemd[1]: Started sshd@3-172.31.19.251:22-139.178.89.65:59016.service - OpenSSH per-connection server daemon (139.178.89.65:59016). Jul 6 23:27:27.419869 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 59016 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:27.422479 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:27.430442 systemd-logind[1977]: New session 4 of user core. Jul 6 23:27:27.435413 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:27:27.563021 sshd[2293]: Connection closed by 139.178.89.65 port 59016 Jul 6 23:27:27.562791 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:27.569608 systemd[1]: sshd@3-172.31.19.251:22-139.178.89.65:59016.service: Deactivated successfully. Jul 6 23:27:27.572211 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:27:27.573965 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:27:27.578183 systemd-logind[1977]: Removed session 4. Jul 6 23:27:27.599737 systemd[1]: Started sshd@4-172.31.19.251:22-139.178.89.65:59018.service - OpenSSH per-connection server daemon (139.178.89.65:59018). 
Jul 6 23:27:27.814311 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 59018 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:27.816761 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:27.826813 systemd-logind[1977]: New session 5 of user core. Jul 6 23:27:27.835417 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:27:27.967011 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:27:27.967663 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:27.982728 sudo[2302]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:28.009124 sshd[2301]: Connection closed by 139.178.89.65 port 59018 Jul 6 23:27:28.007786 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:28.015488 systemd[1]: sshd@4-172.31.19.251:22-139.178.89.65:59018.service: Deactivated successfully. Jul 6 23:27:28.018959 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:27:28.020573 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:27:28.024482 systemd-logind[1977]: Removed session 5. Jul 6 23:27:28.049471 systemd[1]: Started sshd@5-172.31.19.251:22-139.178.89.65:59026.service - OpenSSH per-connection server daemon (139.178.89.65:59026). Jul 6 23:27:28.253266 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 59026 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:28.255740 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:28.263785 systemd-logind[1977]: New session 6 of user core. Jul 6 23:27:28.277422 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:27:28.381173 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:27:28.381770 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:28.392447 sudo[2312]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:28.404295 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:27:28.404979 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:28.422859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:27:28.489885 augenrules[2334]: No rules Jul 6 23:27:28.492445 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:27:28.492891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:27:28.496260 sudo[2311]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:28.520361 sshd[2310]: Connection closed by 139.178.89.65 port 59026 Jul 6 23:27:28.519410 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:28.527314 systemd[1]: sshd@5-172.31.19.251:22-139.178.89.65:59026.service: Deactivated successfully. Jul 6 23:27:28.530717 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:27:28.532285 systemd-logind[1977]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:27:28.535030 systemd-logind[1977]: Removed session 6. Jul 6 23:27:28.557640 systemd[1]: Started sshd@6-172.31.19.251:22-139.178.89.65:59036.service - OpenSSH per-connection server daemon (139.178.89.65:59036). 
Jul 6 23:27:28.756799 sshd[2343]: Accepted publickey for core from 139.178.89.65 port 59036 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:27:28.759401 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:28.769254 systemd-logind[1977]: New session 7 of user core. Jul 6 23:27:28.774460 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:27:28.881539 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:27:28.882248 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:29.470152 systemd-resolved[1806]: Clock change detected. Flushing caches. Jul 6 23:27:29.507249 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:27:29.522083 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:27:30.061363 dockerd[2364]: time="2025-07-06T23:27:30.061081242Z" level=info msg="Starting up" Jul 6 23:27:30.064561 dockerd[2364]: time="2025-07-06T23:27:30.064315650Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:27:30.227109 dockerd[2364]: time="2025-07-06T23:27:30.226892395Z" level=info msg="Loading containers: start." Jul 6 23:27:30.273318 kernel: Initializing XFRM netlink socket Jul 6 23:27:30.662435 (udev-worker)[2387]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:27:30.737071 systemd-networkd[1805]: docker0: Link UP Jul 6 23:27:30.742348 dockerd[2364]: time="2025-07-06T23:27:30.742272550Z" level=info msg="Loading containers: done." Jul 6 23:27:30.769577 dockerd[2364]: time="2025-07-06T23:27:30.769450702Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:27:30.769805 dockerd[2364]: time="2025-07-06T23:27:30.769635298Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:27:30.769866 dockerd[2364]: time="2025-07-06T23:27:30.769826266Z" level=info msg="Initializing buildkit" Jul 6 23:27:30.820962 dockerd[2364]: time="2025-07-06T23:27:30.820524874Z" level=info msg="Completed buildkit initialization" Jul 6 23:27:30.834574 dockerd[2364]: time="2025-07-06T23:27:30.834501142Z" level=info msg="Daemon has completed initialization" Jul 6 23:27:30.834907 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:27:30.836486 dockerd[2364]: time="2025-07-06T23:27:30.835640542Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:27:31.104624 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1488879036-merged.mount: Deactivated successfully. Jul 6 23:27:31.758528 containerd[2001]: time="2025-07-06T23:27:31.758470031Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:27:32.356145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231827745.mount: Deactivated successfully. 
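dockerd above reports "API listen on /run/docker.sock". A small sketch that queries that Engine API endpoint over the Unix socket with only the standard library, assuming the daemon is up and the caller can read the socket:

```python
import http.client
import json
import socket

DOCKER_SOCK = "/run/docker.sock"   # the listen path reported in the log

class UnixHTTPConnection(http.client.HTTPConnection):
    """Plain HTTP over a Unix domain socket instead of TCP."""
    def __init__(self, sock_path: str):
        super().__init__("localhost")
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

conn = UnixHTTPConnection(DOCKER_SOCK)
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info["Version"], info.get("ApiVersion"))   # e.g. "28.0.1" per the log
conn.close()
```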
Jul 6 23:27:33.794371 containerd[2001]: time="2025-07-06T23:27:33.794313577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:33.797390 containerd[2001]: time="2025-07-06T23:27:33.797338393Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 6 23:27:33.800025 containerd[2001]: time="2025-07-06T23:27:33.799949845Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:33.805440 containerd[2001]: time="2025-07-06T23:27:33.805360417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:33.807713 containerd[2001]: time="2025-07-06T23:27:33.807263113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.048148154s" Jul 6 23:27:33.807713 containerd[2001]: time="2025-07-06T23:27:33.807319885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:27:33.812213 containerd[2001]: time="2025-07-06T23:27:33.810415501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:27:35.203357 containerd[2001]: time="2025-07-06T23:27:35.203275212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:35.205522 containerd[2001]: time="2025-07-06T23:27:35.205451496Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 6 23:27:35.207654 containerd[2001]: time="2025-07-06T23:27:35.207582552Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:35.212682 containerd[2001]: time="2025-07-06T23:27:35.212601612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:35.214869 containerd[2001]: time="2025-07-06T23:27:35.214381008Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.401952483s" Jul 6 23:27:35.214869 containerd[2001]: time="2025-07-06T23:27:35.214439016Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:27:35.215136 containerd[2001]: 
time="2025-07-06T23:27:35.215085084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:27:36.437043 containerd[2001]: time="2025-07-06T23:27:36.436961402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:36.439724 containerd[2001]: time="2025-07-06T23:27:36.439653014Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 6 23:27:36.440952 containerd[2001]: time="2025-07-06T23:27:36.440893910Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:36.446571 containerd[2001]: time="2025-07-06T23:27:36.446481026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:36.448543 containerd[2001]: time="2025-07-06T23:27:36.448296770Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.23315261s" Jul 6 23:27:36.448543 containerd[2001]: time="2025-07-06T23:27:36.448356890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:27:36.448997 containerd[2001]: time="2025-07-06T23:27:36.448946054Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:27:37.048517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:27:37.052759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:37.433408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:37.450888 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:37.555037 kubelet[2643]: E0706 23:27:37.554954 2643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:37.566726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:37.567052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:37.569702 systemd[1]: kubelet.service: Consumed 334ms CPU time, 107.2M memory peak. Jul 6 23:27:37.920546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807433797.mount: Deactivated successfully. 
Jul 6 23:27:38.520220 containerd[2001]: time="2025-07-06T23:27:38.520135000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:38.523149 containerd[2001]: time="2025-07-06T23:27:38.523097920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 6 23:27:38.525454 containerd[2001]: time="2025-07-06T23:27:38.525376828Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:38.530642 containerd[2001]: time="2025-07-06T23:27:38.529586524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:38.531122 containerd[2001]: time="2025-07-06T23:27:38.531070457Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.082067511s" Jul 6 23:27:38.531266 containerd[2001]: time="2025-07-06T23:27:38.531238565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 6 23:27:38.532011 containerd[2001]: time="2025-07-06T23:27:38.531918653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:27:39.101945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330367915.mount: Deactivated successfully. 
Jul 6 23:27:40.438281 containerd[2001]: time="2025-07-06T23:27:40.437574834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:40.440206 containerd[2001]: time="2025-07-06T23:27:40.440102298Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 6 23:27:40.443038 containerd[2001]: time="2025-07-06T23:27:40.442943106Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:40.449764 containerd[2001]: time="2025-07-06T23:27:40.449633694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:40.452114 containerd[2001]: time="2025-07-06T23:27:40.451830234Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.919846229s" Jul 6 23:27:40.452114 containerd[2001]: time="2025-07-06T23:27:40.451899426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 6 23:27:40.452588 containerd[2001]: time="2025-07-06T23:27:40.452517990Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:27:40.958612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294645186.mount: Deactivated successfully. 
Jul 6 23:27:40.971962 containerd[2001]: time="2025-07-06T23:27:40.971899257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:40.973826 containerd[2001]: time="2025-07-06T23:27:40.973762149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:27:40.976770 containerd[2001]: time="2025-07-06T23:27:40.976688577Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:40.982114 containerd[2001]: time="2025-07-06T23:27:40.982051749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:27:40.983745 containerd[2001]: time="2025-07-06T23:27:40.983664333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 531.080907ms" Jul 6 23:27:40.983745 containerd[2001]: time="2025-07-06T23:27:40.983732637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:27:40.984596 containerd[2001]: time="2025-07-06T23:27:40.984516045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:27:41.523862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561826524.mount: Deactivated successfully. 
Jul 6 23:27:43.687266 containerd[2001]: time="2025-07-06T23:27:43.686897326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:43.689154 containerd[2001]: time="2025-07-06T23:27:43.689068750Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 6 23:27:43.691919 containerd[2001]: time="2025-07-06T23:27:43.691826074Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:43.698312 containerd[2001]: time="2025-07-06T23:27:43.698151430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:43.701326 containerd[2001]: time="2025-07-06T23:27:43.700641442Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.716058617s" Jul 6 23:27:43.701326 containerd[2001]: time="2025-07-06T23:27:43.700710154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 6 23:27:47.651436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:27:47.657518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:48.027466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:48.040761 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:48.124499 kubelet[2794]: E0706 23:27:48.124422 2794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:48.129464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:48.130867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:48.132441 systemd[1]: kubelet.service: Consumed 321ms CPU time, 106.8M memory peak. Jul 6 23:27:51.598359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:51.598686 systemd[1]: kubelet.service: Consumed 321ms CPU time, 106.8M memory peak. Jul 6 23:27:51.602737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:51.653620 systemd[1]: Reload requested from client PID 2808 ('systemctl') (unit session-7.scope)... Jul 6 23:27:51.653658 systemd[1]: Reloading... Jul 6 23:27:51.895272 zram_generator::config[2849]: No configuration found. Jul 6 23:27:52.111829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:27:52.381774 systemd[1]: Reloading finished in 727 ms. 
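The kubelet attempts above all exit the same way: /var/lib/kubelet/config.yaml does not exist yet, so the unit fails and systemd schedules another restart. A tiny illustrative preflight in the same spirit (this is not kubelet code; kubeadm normally writes that file during init/join):

```python
import sys
from pathlib import Path

# Path the failing kubelet runs above are looking for.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.is_file():
    # Mirrors the "no such file or directory" error in the log; on a node that
    # has not been joined yet this is expected until kubeadm writes the file.
    sys.exit(f"kubelet config missing: {KUBELET_CONFIG}")

print(f"found {KUBELET_CONFIG}; kubelet has a config to load")
```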
Jul 6 23:27:52.498287 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:27:52.498579 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:27:52.499382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:52.499523 systemd[1]: kubelet.service: Consumed 234ms CPU time, 95.2M memory peak. Jul 6 23:27:52.504680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:52.870813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:52.885932 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:27:52.965779 kubelet[2917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:27:52.967218 kubelet[2917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:27:52.967218 kubelet[2917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:27:52.967218 kubelet[2917]: I0706 23:27:52.966469 2917 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:27:54.044643 kubelet[2917]: I0706 23:27:54.044509 2917 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:27:54.046720 kubelet[2917]: I0706 23:27:54.044917 2917 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:27:54.046720 kubelet[2917]: I0706 23:27:54.045684 2917 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:27:54.103637 kubelet[2917]: E0706 23:27:54.103544 2917 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.251:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:27:54.107666 kubelet[2917]: I0706 23:27:54.107600 2917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:27:54.122312 kubelet[2917]: I0706 23:27:54.122219 2917 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:27:54.128416 kubelet[2917]: I0706 23:27:54.128356 2917 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:27:54.129091 kubelet[2917]: I0706 23:27:54.129036 2917 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:27:54.129402 kubelet[2917]: I0706 23:27:54.129089 2917 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-251","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:27:54.129628 kubelet[2917]: I0706 23:27:54.129581 2917 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:27:54.129628 kubelet[2917]: I0706 23:27:54.129624 2917 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:27:54.130045 kubelet[2917]: I0706 23:27:54.130003 2917 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:27:54.140874 kubelet[2917]: I0706 23:27:54.140519 2917 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:27:54.140874 kubelet[2917]: I0706 23:27:54.140590 2917 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:27:54.140874 kubelet[2917]: I0706 23:27:54.140661 2917 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:27:54.147917 kubelet[2917]: I0706 23:27:54.147843 2917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:27:54.151156 kubelet[2917]: E0706 23:27:54.151039 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-251&limit=500&resourceVersion=0\": dial tcp 172.31.19.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:27:54.151793 kubelet[2917]: E0706 23:27:54.151605 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 
23:27:54.154225 kubelet[2917]: I0706 23:27:54.152103 2917 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:27:54.154225 kubelet[2917]: I0706 23:27:54.153382 2917 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:27:54.154225 kubelet[2917]: W0706 23:27:54.153619 2917 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:27:54.163449 kubelet[2917]: I0706 23:27:54.163408 2917 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:27:54.163701 kubelet[2917]: I0706 23:27:54.163677 2917 server.go:1289] "Started kubelet" Jul 6 23:27:54.175532 kubelet[2917]: E0706 23:27:54.173098 2917 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.251:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-251.184fcd448040b672 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-251,UID:ip-172-31-19-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-251,},FirstTimestamp:2025-07-06 23:27:54.163607154 +0000 UTC m=+1.270003819,LastTimestamp:2025-07-06 23:27:54.163607154 +0000 UTC m=+1.270003819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-251,}" Jul 6 23:27:54.179226 kubelet[2917]: I0706 23:27:54.179073 2917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:27:54.179773 kubelet[2917]: I0706 23:27:54.179725 2917 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:27:54.179874 kubelet[2917]: I0706 23:27:54.179844 2917 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:27:54.181833 kubelet[2917]: I0706 23:27:54.181759 2917 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:27:54.186257 kubelet[2917]: I0706 23:27:54.186146 2917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:27:54.188420 kubelet[2917]: I0706 23:27:54.188386 2917 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:27:54.189488 kubelet[2917]: E0706 23:27:54.189424 2917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-251\" not found" Jul 6 23:27:54.192854 kubelet[2917]: I0706 23:27:54.191587 2917 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:27:54.192854 kubelet[2917]: I0706 23:27:54.188444 2917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:27:54.193810 kubelet[2917]: E0706 23:27:54.193746 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:27:54.193810 kubelet[2917]: I0706 
23:27:54.191676 2917 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:27:54.195173 kubelet[2917]: E0706 23:27:54.195072 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-251?timeout=10s\": dial tcp 172.31.19.251:6443: connect: connection refused" interval="200ms" Jul 6 23:27:54.198957 kubelet[2917]: E0706 23:27:54.198875 2917 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:27:54.202746 kubelet[2917]: I0706 23:27:54.202687 2917 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:27:54.202746 kubelet[2917]: I0706 23:27:54.202731 2917 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:27:54.202971 kubelet[2917]: I0706 23:27:54.202880 2917 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:27:54.240250 kubelet[2917]: I0706 23:27:54.239201 2917 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:27:54.240250 kubelet[2917]: I0706 23:27:54.239294 2917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:27:54.240250 kubelet[2917]: I0706 23:27:54.239333 2917 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:27:54.248589 kubelet[2917]: I0706 23:27:54.248498 2917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:27:54.251791 kubelet[2917]: I0706 23:27:54.251067 2917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:27:54.251791 kubelet[2917]: I0706 23:27:54.251115 2917 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:27:54.251791 kubelet[2917]: I0706 23:27:54.251150 2917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:27:54.251791 kubelet[2917]: I0706 23:27:54.251165 2917 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:27:54.251791 kubelet[2917]: E0706 23:27:54.251345 2917 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:27:54.255226 kubelet[2917]: I0706 23:27:54.254524 2917 policy_none.go:49] "None policy: Start" Jul 6 23:27:54.255226 kubelet[2917]: I0706 23:27:54.254572 2917 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:27:54.255226 kubelet[2917]: I0706 23:27:54.254599 2917 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:27:54.255984 kubelet[2917]: E0706 23:27:54.255844 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:27:54.271993 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:27:54.291215 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
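The Container Manager nodeConfig dumped a few entries back is a single JSON object, so the hard-eviction thresholds it carries can be pulled out and printed in a readable form. The snippet below embeds a trimmed copy of just the HardEvictionThresholds field from that dump; it is a reading aid only, not part of the boot flow:

    #!/usr/bin/env python3
    # Sketch: decode the HardEvictionThresholds block from the nodeConfig JSON logged by
    # container_manager_linux.go above (trimmed to the relevant field, otherwise verbatim).
    import json

    NODE_CONFIG_SNIPPET = """
    {"HardEvictionThresholds":[
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
      {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}
    ]}
    """

    for t in json.loads(NODE_CONFIG_SNIPPET)["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(f'{t["Signal"]:<22} {t["Operator"]} {value}')

Running it prints the kubelet's default hard-eviction set (memory.available < 100Mi, nodefs.available < 10%, and so on), which is what this node is using since no overrides appear in the dump.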
Jul 6 23:27:54.293452 kubelet[2917]: E0706 23:27:54.293373 2917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-251\" not found" Jul 6 23:27:54.300348 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:27:54.316122 kubelet[2917]: E0706 23:27:54.316071 2917 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:27:54.317734 kubelet[2917]: I0706 23:27:54.316759 2917 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:27:54.317734 kubelet[2917]: I0706 23:27:54.316800 2917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:27:54.317734 kubelet[2917]: I0706 23:27:54.317225 2917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:27:54.321126 kubelet[2917]: E0706 23:27:54.320923 2917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:27:54.321688 kubelet[2917]: E0706 23:27:54.321632 2917 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-251\" not found" Jul 6 23:27:54.378098 systemd[1]: Created slice kubepods-burstable-pod08202881c03dfca88772fb0ac1250cba.slice - libcontainer container kubepods-burstable-pod08202881c03dfca88772fb0ac1250cba.slice. Jul 6 23:27:54.392672 kubelet[2917]: E0706 23:27:54.392261 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:54.396069 kubelet[2917]: E0706 23:27:54.396007 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-251?timeout=10s\": dial tcp 172.31.19.251:6443: connect: connection refused" interval="400ms" Jul 6 23:27:54.396502 kubelet[2917]: I0706 23:27:54.396367 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08202881c03dfca88772fb0ac1250cba-ca-certs\") pod \"kube-apiserver-ip-172-31-19-251\" (UID: \"08202881c03dfca88772fb0ac1250cba\") " pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:27:54.396754 kubelet[2917]: I0706 23:27:54.396720 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08202881c03dfca88772fb0ac1250cba-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-251\" (UID: \"08202881c03dfca88772fb0ac1250cba\") " pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:27:54.396969 kubelet[2917]: I0706 23:27:54.396936 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:54.397163 kubelet[2917]: I0706 23:27:54.397131 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:54.397399 kubelet[2917]: I0706 23:27:54.397339 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b72fb8d9b9754d168645413171be4543-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-251\" (UID: \"b72fb8d9b9754d168645413171be4543\") " pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:27:54.397797 kubelet[2917]: I0706 23:27:54.397536 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08202881c03dfca88772fb0ac1250cba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-251\" (UID: \"08202881c03dfca88772fb0ac1250cba\") " pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:27:54.397797 kubelet[2917]: I0706 23:27:54.397582 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:54.397797 kubelet[2917]: I0706 23:27:54.397627 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:54.397797 kubelet[2917]: I0706 23:27:54.397662 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:54.400483 systemd[1]: Created slice kubepods-burstable-pod138f7cc423bd3b3fd275f9e3c0cc0380.slice - libcontainer container kubepods-burstable-pod138f7cc423bd3b3fd275f9e3c0cc0380.slice. Jul 6 23:27:54.413083 kubelet[2917]: E0706 23:27:54.413033 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:54.420688 systemd[1]: Created slice kubepods-burstable-podb72fb8d9b9754d168645413171be4543.slice - libcontainer container kubepods-burstable-podb72fb8d9b9754d168645413171be4543.slice. 
Jul 6 23:27:54.422872 kubelet[2917]: I0706 23:27:54.422833 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-251" Jul 6 23:27:54.424241 kubelet[2917]: E0706 23:27:54.423905 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.251:6443/api/v1/nodes\": dial tcp 172.31.19.251:6443: connect: connection refused" node="ip-172-31-19-251" Jul 6 23:27:54.428250 kubelet[2917]: E0706 23:27:54.428153 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:54.454834 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 6 23:27:54.627580 kubelet[2917]: I0706 23:27:54.627377 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-251" Jul 6 23:27:54.628398 kubelet[2917]: E0706 23:27:54.628322 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.251:6443/api/v1/nodes\": dial tcp 172.31.19.251:6443: connect: connection refused" node="ip-172-31-19-251" Jul 6 23:27:54.693943 containerd[2001]: time="2025-07-06T23:27:54.693871365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-251,Uid:08202881c03dfca88772fb0ac1250cba,Namespace:kube-system,Attempt:0,}" Jul 6 23:27:54.715229 containerd[2001]: time="2025-07-06T23:27:54.714863781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-251,Uid:138f7cc423bd3b3fd275f9e3c0cc0380,Namespace:kube-system,Attempt:0,}" Jul 6 23:27:54.739686 containerd[2001]: time="2025-07-06T23:27:54.739623237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-251,Uid:b72fb8d9b9754d168645413171be4543,Namespace:kube-system,Attempt:0,}" Jul 6 23:27:54.751713 containerd[2001]: time="2025-07-06T23:27:54.751648257Z" level=info msg="connecting to shim a23226e9bd6305d3276f57180055db17edf41f5ff0d60140ccd36b9c2fe624db" address="unix:///run/containerd/s/61a6f073b08d4146c54511c4676642e615448b02f97d38d5adc914ee413f73d7" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:27:54.798961 kubelet[2917]: E0706 23:27:54.798789 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-251?timeout=10s\": dial tcp 172.31.19.251:6443: connect: connection refused" interval="800ms" Jul 6 23:27:54.813944 containerd[2001]: time="2025-07-06T23:27:54.813237573Z" level=info msg="connecting to shim aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1" address="unix:///run/containerd/s/1cb451ca3542fec1e4e0f5d6b4783bbad4ab61180c076621172584b1dbf5e7dc" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:27:54.816541 systemd[1]: Started cri-containerd-a23226e9bd6305d3276f57180055db17edf41f5ff0d60140ccd36b9c2fe624db.scope - libcontainer container a23226e9bd6305d3276f57180055db17edf41f5ff0d60140ccd36b9c2fe624db. 
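Every "connection refused" in this stretch points at the same endpoint, https://172.31.19.251:6443; it stays closed until the kube-apiserver container whose sandbox is being created here is actually up, so the reflector and lease-controller retries are normal. A quick reachability probe for that endpoint, equivalent to the TCP dial the kubelet keeps attempting (host and port are from the log; the timeout is an arbitrary choice):

    #!/usr/bin/env python3
    # Sketch: TCP probe of the API server endpoint the kubelet keeps failing to reach above.
    # Address and port are taken from the log; the 2 s timeout is arbitrary.
    import socket

    HOST, PORT = "172.31.19.251", 6443

    try:
        with socket.create_connection((HOST, PORT), timeout=2.0):
            print(f"{HOST}:{PORT} is accepting connections")
    except OSError as exc:
        # Before the kube-apiserver static pod is running this prints "Connection refused",
        # matching the reflector and lease-controller errors in the journal.
        print(f"{HOST}:{PORT} unreachable: {exc}")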
Jul 6 23:27:54.871371 containerd[2001]: time="2025-07-06T23:27:54.870971530Z" level=info msg="connecting to shim e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6" address="unix:///run/containerd/s/36d92f1eb715be9b29306179fa7f553a7d01f4e8171efaf2e2ea070789869f5f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:27:54.917560 systemd[1]: Started cri-containerd-aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1.scope - libcontainer container aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1. Jul 6 23:27:54.959517 systemd[1]: Started cri-containerd-e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6.scope - libcontainer container e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6. Jul 6 23:27:54.971991 containerd[2001]: time="2025-07-06T23:27:54.971923066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-251,Uid:08202881c03dfca88772fb0ac1250cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23226e9bd6305d3276f57180055db17edf41f5ff0d60140ccd36b9c2fe624db\"" Jul 6 23:27:54.986935 containerd[2001]: time="2025-07-06T23:27:54.986780254Z" level=info msg="CreateContainer within sandbox \"a23226e9bd6305d3276f57180055db17edf41f5ff0d60140ccd36b9c2fe624db\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:27:55.009168 containerd[2001]: time="2025-07-06T23:27:55.009013494Z" level=info msg="Container e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:27:55.031433 containerd[2001]: time="2025-07-06T23:27:55.031346934Z" level=info msg="CreateContainer within sandbox \"a23226e9bd6305d3276f57180055db17edf41f5ff0d60140ccd36b9c2fe624db\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29\"" Jul 6 23:27:55.035473 kubelet[2917]: I0706 23:27:55.035388 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-251" Jul 6 23:27:55.036659 kubelet[2917]: E0706 23:27:55.036572 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.251:6443/api/v1/nodes\": dial tcp 172.31.19.251:6443: connect: connection refused" node="ip-172-31-19-251" Jul 6 23:27:55.036933 containerd[2001]: time="2025-07-06T23:27:55.036832902Z" level=info msg="StartContainer for \"e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29\"" Jul 6 23:27:55.045976 containerd[2001]: time="2025-07-06T23:27:55.042387307Z" level=info msg="connecting to shim e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29" address="unix:///run/containerd/s/61a6f073b08d4146c54511c4676642e615448b02f97d38d5adc914ee413f73d7" protocol=ttrpc version=3 Jul 6 23:27:55.086575 containerd[2001]: time="2025-07-06T23:27:55.086388559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-251,Uid:138f7cc423bd3b3fd275f9e3c0cc0380,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1\"" Jul 6 23:27:55.099501 containerd[2001]: time="2025-07-06T23:27:55.099441055Z" level=info msg="CreateContainer within sandbox \"aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:27:55.101466 containerd[2001]: time="2025-07-06T23:27:55.101396179Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-251,Uid:b72fb8d9b9754d168645413171be4543,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6\"" Jul 6 23:27:55.120103 containerd[2001]: time="2025-07-06T23:27:55.120042451Z" level=info msg="Container 9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:27:55.122076 systemd[1]: Started cri-containerd-e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29.scope - libcontainer container e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29. Jul 6 23:27:55.130338 containerd[2001]: time="2025-07-06T23:27:55.129773935Z" level=info msg="CreateContainer within sandbox \"e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:27:55.144161 containerd[2001]: time="2025-07-06T23:27:55.144090199Z" level=info msg="CreateContainer within sandbox \"aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\"" Jul 6 23:27:55.147004 containerd[2001]: time="2025-07-06T23:27:55.146939215Z" level=info msg="StartContainer for \"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\"" Jul 6 23:27:55.157397 containerd[2001]: time="2025-07-06T23:27:55.157305247Z" level=info msg="connecting to shim 9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e" address="unix:///run/containerd/s/1cb451ca3542fec1e4e0f5d6b4783bbad4ab61180c076621172584b1dbf5e7dc" protocol=ttrpc version=3 Jul 6 23:27:55.163947 containerd[2001]: time="2025-07-06T23:27:55.163879591Z" level=info msg="Container 4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:27:55.197612 containerd[2001]: time="2025-07-06T23:27:55.197454343Z" level=info msg="CreateContainer within sandbox \"e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\"" Jul 6 23:27:55.199175 containerd[2001]: time="2025-07-06T23:27:55.199008187Z" level=info msg="StartContainer for \"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\"" Jul 6 23:27:55.203877 containerd[2001]: time="2025-07-06T23:27:55.203799799Z" level=info msg="connecting to shim 4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f" address="unix:///run/containerd/s/36d92f1eb715be9b29306179fa7f553a7d01f4e8171efaf2e2ea070789869f5f" protocol=ttrpc version=3 Jul 6 23:27:55.212087 kubelet[2917]: E0706 23:27:55.211994 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:27:55.219727 systemd[1]: Started cri-containerd-9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e.scope - libcontainer container 9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e. 
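By now containerd has three pod sandboxes and is starting one control-plane container in each; the "connecting to shim" addresses show each container reusing the shim socket of its own sandbox (the kube-apiserver container shares .../s/61a6f0... with its sandbox, and so on). To inspect that state directly at the CRI level, crictl can be pointed at containerd's socket; a small wrapper, assuming crictl is installed on the node and containerd uses its default socket path:

    #!/usr/bin/env python3
    # Sketch: list the sandboxes and containers containerd just created, via crictl.
    # Assumes crictl is installed; the endpoint below is containerd's default CRI socket
    # and may need adjusting on other setups.
    import subprocess

    ENDPOINT = "unix:///run/containerd/containerd.sock"

    for args in (["pods"], ["ps", "-a"]):
        cmd = ["crictl", "--runtime-endpoint", ENDPOINT, *args]
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)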
Jul 6 23:27:55.280743 systemd[1]: Started cri-containerd-4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f.scope - libcontainer container 4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f. Jul 6 23:27:55.286683 containerd[2001]: time="2025-07-06T23:27:55.286500008Z" level=info msg="StartContainer for \"e9ed56c6191f2f2b1649cbd4e030be5ed576c8e0bb29a3bcc91e713c8f1c9d29\" returns successfully" Jul 6 23:27:55.308502 kubelet[2917]: E0706 23:27:55.308079 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:55.395312 containerd[2001]: time="2025-07-06T23:27:55.394023008Z" level=info msg="StartContainer for \"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\" returns successfully" Jul 6 23:27:55.532561 containerd[2001]: time="2025-07-06T23:27:55.532167477Z" level=info msg="StartContainer for \"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\" returns successfully" Jul 6 23:27:55.840382 kubelet[2917]: I0706 23:27:55.840124 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-251" Jul 6 23:27:56.315866 kubelet[2917]: E0706 23:27:56.315148 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:56.319157 kubelet[2917]: E0706 23:27:56.319049 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:56.319512 kubelet[2917]: E0706 23:27:56.319481 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:57.322220 kubelet[2917]: E0706 23:27:57.320642 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:57.323239 kubelet[2917]: E0706 23:27:57.320766 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:58.325377 kubelet[2917]: E0706 23:27:58.325316 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:58.326151 kubelet[2917]: E0706 23:27:58.326088 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:59.328098 kubelet[2917]: E0706 23:27:59.328047 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-251\" not found" node="ip-172-31-19-251" Jul 6 23:27:59.773147 kubelet[2917]: I0706 23:27:59.772478 2917 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-251" Jul 6 23:27:59.793021 kubelet[2917]: I0706 23:27:59.792282 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:27:59.833882 kubelet[2917]: E0706 23:27:59.833503 2917 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-19-251.184fcd448040b672 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-251,UID:ip-172-31-19-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-251,},FirstTimestamp:2025-07-06 23:27:54.163607154 +0000 UTC m=+1.270003819,LastTimestamp:2025-07-06 23:27:54.163607154 +0000 UTC m=+1.270003819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-251,}" Jul 6 23:27:59.908993 kubelet[2917]: E0706 23:27:59.908946 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:27:59.910578 kubelet[2917]: I0706 23:27:59.910231 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:59.917972 kubelet[2917]: E0706 23:27:59.917925 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:27:59.918468 kubelet[2917]: I0706 23:27:59.918167 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:27:59.924611 kubelet[2917]: E0706 23:27:59.924567 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Jul 6 23:27:59.929761 kubelet[2917]: E0706 23:27:59.929700 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:28:00.153846 kubelet[2917]: I0706 23:28:00.153534 2917 apiserver.go:52] "Watching apiserver" Jul 6 23:28:00.193160 kubelet[2917]: I0706 23:28:00.193118 2917 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:02.392561 systemd[1]: Reload requested from client PID 3201 ('systemctl') (unit session-7.scope)... Jul 6 23:28:02.392603 systemd[1]: Reloading... Jul 6 23:28:02.616235 zram_generator::config[3251]: No configuration found. Jul 6 23:28:02.803747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:03.117456 systemd[1]: Reloading finished in 724 ms. Jul 6 23:28:03.171506 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:03.182837 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:28:03.184367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:03.184485 systemd[1]: kubelet.service: Consumed 2.112s CPU time, 126.4M memory peak. Jul 6 23:28:03.189934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:03.599292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:28:03.619894 (kubelet)[3305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:03.717247 kubelet[3305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:03.717247 kubelet[3305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:28:03.717247 kubelet[3305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:03.717247 kubelet[3305]: I0706 23:28:03.717063 3305 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:28:03.732742 kubelet[3305]: I0706 23:28:03.732534 3305 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:28:03.733519 kubelet[3305]: I0706 23:28:03.733281 3305 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:28:03.733793 kubelet[3305]: I0706 23:28:03.733738 3305 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:28:03.736402 kubelet[3305]: I0706 23:28:03.736272 3305 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:28:03.741519 kubelet[3305]: I0706 23:28:03.741465 3305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:03.753759 kubelet[3305]: I0706 23:28:03.753722 3305 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:28:03.773932 kubelet[3305]: I0706 23:28:03.773450 3305 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:28:03.774690 kubelet[3305]: I0706 23:28:03.774628 3305 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:28:03.776041 kubelet[3305]: I0706 23:28:03.774841 3305 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-251","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:28:03.776041 kubelet[3305]: I0706 23:28:03.775901 3305 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:28:03.776041 kubelet[3305]: I0706 23:28:03.775923 3305 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:28:03.776041 kubelet[3305]: I0706 23:28:03.776011 3305 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:03.778229 kubelet[3305]: I0706 23:28:03.777001 3305 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:28:03.778229 kubelet[3305]: I0706 23:28:03.777079 3305 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:28:03.778229 kubelet[3305]: I0706 23:28:03.777128 3305 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:28:03.778229 kubelet[3305]: I0706 23:28:03.777158 3305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:28:03.784976 kubelet[3305]: I0706 23:28:03.784923 3305 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:28:03.788038 kubelet[3305]: I0706 23:28:03.787524 3305 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:28:03.796471 kubelet[3305]: I0706 23:28:03.796398 3305 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:28:03.796593 kubelet[3305]: I0706 23:28:03.796484 3305 server.go:1289] "Started kubelet" Jul 6 23:28:03.797140 kubelet[3305]: I0706 23:28:03.797076 3305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:28:03.799890 
kubelet[3305]: I0706 23:28:03.799837 3305 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:28:03.800244 kubelet[3305]: I0706 23:28:03.800171 3305 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:28:03.804529 kubelet[3305]: I0706 23:28:03.804493 3305 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:28:03.808576 kubelet[3305]: I0706 23:28:03.808520 3305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:28:03.821078 kubelet[3305]: I0706 23:28:03.821015 3305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:28:03.824812 kubelet[3305]: I0706 23:28:03.824763 3305 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:28:03.825161 kubelet[3305]: E0706 23:28:03.825119 3305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-251\" not found" Jul 6 23:28:03.829773 kubelet[3305]: I0706 23:28:03.828924 3305 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:28:03.829773 kubelet[3305]: I0706 23:28:03.829144 3305 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:28:03.846619 kubelet[3305]: I0706 23:28:03.846453 3305 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:28:03.851310 kubelet[3305]: I0706 23:28:03.851132 3305 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:28:03.874688 kubelet[3305]: E0706 23:28:03.874638 3305 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:28:03.875382 kubelet[3305]: I0706 23:28:03.875348 3305 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:28:03.962776 kubelet[3305]: I0706 23:28:03.962518 3305 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:28:03.979608 kubelet[3305]: I0706 23:28:03.979531 3305 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:28:03.979608 kubelet[3305]: I0706 23:28:03.979583 3305 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:28:03.979608 kubelet[3305]: I0706 23:28:03.979616 3305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
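Unlike the first start, this kubelet instance loads an existing client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem, i.e. the TLS bootstrap completed during the earlier run and "Client rotation is on" will keep renewing it. If that certificate's validity window needs checking from the node, one way is to shell out to openssl; this assumes openssl is present and that the certificate PEM block comes first in the combined cert/key file, which is how client-go's certificate store normally writes it:

    #!/usr/bin/env python3
    # Sketch: print subject and validity of the kubelet client certificate referenced in the
    # "Loading cert/key pair from a file" entry above. Requires openssl; assumes the
    # certificate PEM block precedes the key block in the file.
    import subprocess

    CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    subprocess.run(
        ["openssl", "x509", "-in", CERT, "-noout", "-subject", "-startdate", "-enddate"],
        check=True,
    )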
Jul 6 23:28:03.979608 kubelet[3305]: I0706 23:28:03.979630 3305 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:28:03.979608 kubelet[3305]: E0706 23:28:03.979759 3305 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.077903 3305 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.077937 3305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.077973 3305 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.078249 3305 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.078270 3305 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.078301 3305 policy_none.go:49] "None policy: Start" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.078319 3305 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:28:04.078364 kubelet[3305]: I0706 23:28:04.078342 3305 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:28:04.080697 kubelet[3305]: I0706 23:28:04.078521 3305 state_mem.go:75] "Updated machine memory state" Jul 6 23:28:04.080697 kubelet[3305]: E0706 23:28:04.079828 3305 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:28:04.090117 kubelet[3305]: E0706 23:28:04.089431 3305 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:28:04.090117 kubelet[3305]: I0706 23:28:04.089928 3305 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:28:04.090339 kubelet[3305]: I0706 23:28:04.089952 3305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:28:04.094362 kubelet[3305]: I0706 23:28:04.093229 3305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:28:04.098476 kubelet[3305]: E0706 23:28:04.098424 3305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:28:04.214553 kubelet[3305]: I0706 23:28:04.213509 3305 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-251" Jul 6 23:28:04.236585 kubelet[3305]: I0706 23:28:04.236528 3305 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-251" Jul 6 23:28:04.236882 kubelet[3305]: I0706 23:28:04.236656 3305 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-251" Jul 6 23:28:04.281586 kubelet[3305]: I0706 23:28:04.281511 3305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:28:04.283461 kubelet[3305]: I0706 23:28:04.282341 3305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:28:04.283461 kubelet[3305]: I0706 23:28:04.282567 3305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:28:04.333520 kubelet[3305]: I0706 23:28:04.333459 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b72fb8d9b9754d168645413171be4543-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-251\" (UID: \"b72fb8d9b9754d168645413171be4543\") " pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:28:04.333679 kubelet[3305]: I0706 23:28:04.333531 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08202881c03dfca88772fb0ac1250cba-ca-certs\") pod \"kube-apiserver-ip-172-31-19-251\" (UID: \"08202881c03dfca88772fb0ac1250cba\") " pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:28:04.333679 kubelet[3305]: I0706 23:28:04.333571 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08202881c03dfca88772fb0ac1250cba-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-251\" (UID: \"08202881c03dfca88772fb0ac1250cba\") " pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:28:04.333679 kubelet[3305]: I0706 23:28:04.333609 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:28:04.333679 kubelet[3305]: I0706 23:28:04.333649 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:28:04.334039 kubelet[3305]: I0706 23:28:04.333953 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:28:04.335000 kubelet[3305]: I0706 23:28:04.334119 3305 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08202881c03dfca88772fb0ac1250cba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-251\" (UID: \"08202881c03dfca88772fb0ac1250cba\") " pod="kube-system/kube-apiserver-ip-172-31-19-251" Jul 6 23:28:04.335000 kubelet[3305]: I0706 23:28:04.334226 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:28:04.335000 kubelet[3305]: I0706 23:28:04.334316 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/138f7cc423bd3b3fd275f9e3c0cc0380-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-251\" (UID: \"138f7cc423bd3b3fd275f9e3c0cc0380\") " pod="kube-system/kube-controller-manager-ip-172-31-19-251" Jul 6 23:28:04.779726 kubelet[3305]: I0706 23:28:04.778668 3305 apiserver.go:52] "Watching apiserver" Jul 6 23:28:04.829634 kubelet[3305]: I0706 23:28:04.829553 3305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:05.033084 kubelet[3305]: I0706 23:28:05.031932 3305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:28:05.045465 kubelet[3305]: E0706 23:28:05.044815 3305 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-251\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-251" Jul 6 23:28:05.079352 kubelet[3305]: I0706 23:28:05.078428 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-251" podStartSLOduration=1.07840726 podStartE2EDuration="1.07840726s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:05.077641144 +0000 UTC m=+1.445604392" watchObservedRunningTime="2025-07-06 23:28:05.07840726 +0000 UTC m=+1.446370508" Jul 6 23:28:05.118164 kubelet[3305]: I0706 23:28:05.118057 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-251" podStartSLOduration=1.118033889 podStartE2EDuration="1.118033889s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:05.095921812 +0000 UTC m=+1.463885060" watchObservedRunningTime="2025-07-06 23:28:05.118033889 +0000 UTC m=+1.485997137" Jul 6 23:28:05.137304 kubelet[3305]: I0706 23:28:05.136728 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-251" podStartSLOduration=1.136706309 podStartE2EDuration="1.136706309s" podCreationTimestamp="2025-07-06 23:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:05.119450561 +0000 UTC m=+1.487413821" watchObservedRunningTime="2025-07-06 23:28:05.136706309 +0000 UTC m=+1.504669557" Jul 6 23:28:07.659262 
update_engine[1978]: I20250706 23:28:07.658612 1978 update_attempter.cc:509] Updating boot flags... Jul 6 23:28:08.322215 kubelet[3305]: I0706 23:28:08.321887 3305 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:28:08.323767 containerd[2001]: time="2025-07-06T23:28:08.323602184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:28:08.324931 kubelet[3305]: I0706 23:28:08.324673 3305 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:28:09.376684 kubelet[3305]: I0706 23:28:09.374621 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24e1bb92-5dfe-4a76-add3-c534fda49623-xtables-lock\") pod \"kube-proxy-zrdx7\" (UID: \"24e1bb92-5dfe-4a76-add3-c534fda49623\") " pod="kube-system/kube-proxy-zrdx7" Jul 6 23:28:09.376684 kubelet[3305]: I0706 23:28:09.376432 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24e1bb92-5dfe-4a76-add3-c534fda49623-lib-modules\") pod \"kube-proxy-zrdx7\" (UID: \"24e1bb92-5dfe-4a76-add3-c534fda49623\") " pod="kube-system/kube-proxy-zrdx7" Jul 6 23:28:09.376684 kubelet[3305]: I0706 23:28:09.376534 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24e1bb92-5dfe-4a76-add3-c534fda49623-kube-proxy\") pod \"kube-proxy-zrdx7\" (UID: \"24e1bb92-5dfe-4a76-add3-c534fda49623\") " pod="kube-system/kube-proxy-zrdx7" Jul 6 23:28:09.376684 kubelet[3305]: I0706 23:28:09.376604 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xw8m\" (UniqueName: \"kubernetes.io/projected/24e1bb92-5dfe-4a76-add3-c534fda49623-kube-api-access-8xw8m\") pod \"kube-proxy-zrdx7\" (UID: \"24e1bb92-5dfe-4a76-add3-c534fda49623\") " pod="kube-system/kube-proxy-zrdx7" Jul 6 23:28:09.577915 kubelet[3305]: I0706 23:28:09.577829 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9b8df54-96c8-48f9-a253-6c915fba4af6-var-lib-calico\") pod \"tigera-operator-747864d56d-dhxm6\" (UID: \"c9b8df54-96c8-48f9-a253-6c915fba4af6\") " pod="tigera-operator/tigera-operator-747864d56d-dhxm6" Jul 6 23:28:09.578323 kubelet[3305]: I0706 23:28:09.578272 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47l9c\" (UniqueName: \"kubernetes.io/projected/c9b8df54-96c8-48f9-a253-6c915fba4af6-kube-api-access-47l9c\") pod \"tigera-operator-747864d56d-dhxm6\" (UID: \"c9b8df54-96c8-48f9-a253-6c915fba4af6\") " pod="tigera-operator/tigera-operator-747864d56d-dhxm6" Jul 6 23:28:09.579584 systemd[1]: Created slice kubepods-besteffort-pod24e1bb92_5dfe_4a76_add3_c534fda49623.slice - libcontainer container kubepods-besteffort-pod24e1bb92_5dfe_4a76_add3_c534fda49623.slice. 
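The "No cni config template is specified, wait for other system components to drop the config" message means containerd is still waiting for a CNI conflist; on this cluster that is expected to arrive once the tigera-operator pod being set up here installs Calico. A small check of the conventional CNI config directory; /etc/cni/net.d is the containerd CRI plugin's default conf_dir and is assumed here rather than read from the log:

    #!/usr/bin/env python3
    # Sketch: check whether a CNI config has been dropped yet. /etc/cni/net.d is the
    # containerd CRI plugin's default conf_dir - an assumption, not taken from this log.
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"

    try:
        entries = sorted(os.listdir(CNI_CONF_DIR))
    except FileNotFoundError:
        entries = []

    if entries:
        print(f"CNI config present in {CNI_CONF_DIR}: {', '.join(entries)}")
    else:
        print(f"{CNI_CONF_DIR} is empty - the runtime keeps reporting the pod network as not ready")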
Jul 6 23:28:09.599387 containerd[2001]: time="2025-07-06T23:28:09.598909331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zrdx7,Uid:24e1bb92-5dfe-4a76-add3-c534fda49623,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:09.604277 systemd[1]: Created slice kubepods-besteffort-podc9b8df54_96c8_48f9_a253_6c915fba4af6.slice - libcontainer container kubepods-besteffort-podc9b8df54_96c8_48f9_a253_6c915fba4af6.slice. Jul 6 23:28:09.643573 containerd[2001]: time="2025-07-06T23:28:09.643370591Z" level=info msg="connecting to shim 53c0c2686a19d45d396d8ae18c8b8d685c5d7fbd4007756e07f87ccad8df28a1" address="unix:///run/containerd/s/46e6860fe700c6da2d1f6841f66b1c3d7c3ab08acb3226ebcff967610ea62758" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:09.704456 systemd[1]: Started cri-containerd-53c0c2686a19d45d396d8ae18c8b8d685c5d7fbd4007756e07f87ccad8df28a1.scope - libcontainer container 53c0c2686a19d45d396d8ae18c8b8d685c5d7fbd4007756e07f87ccad8df28a1. Jul 6 23:28:09.764824 containerd[2001]: time="2025-07-06T23:28:09.764751684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zrdx7,Uid:24e1bb92-5dfe-4a76-add3-c534fda49623,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c0c2686a19d45d396d8ae18c8b8d685c5d7fbd4007756e07f87ccad8df28a1\"" Jul 6 23:28:09.778002 containerd[2001]: time="2025-07-06T23:28:09.777918936Z" level=info msg="CreateContainer within sandbox \"53c0c2686a19d45d396d8ae18c8b8d685c5d7fbd4007756e07f87ccad8df28a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:28:09.804591 containerd[2001]: time="2025-07-06T23:28:09.804517980Z" level=info msg="Container c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:09.824947 containerd[2001]: time="2025-07-06T23:28:09.824854944Z" level=info msg="CreateContainer within sandbox \"53c0c2686a19d45d396d8ae18c8b8d685c5d7fbd4007756e07f87ccad8df28a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e\"" Jul 6 23:28:09.826367 containerd[2001]: time="2025-07-06T23:28:09.826303332Z" level=info msg="StartContainer for \"c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e\"" Jul 6 23:28:09.831702 containerd[2001]: time="2025-07-06T23:28:09.831636708Z" level=info msg="connecting to shim c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e" address="unix:///run/containerd/s/46e6860fe700c6da2d1f6841f66b1c3d7c3ab08acb3226ebcff967610ea62758" protocol=ttrpc version=3 Jul 6 23:28:09.871519 systemd[1]: Started cri-containerd-c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e.scope - libcontainer container c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e. 
Jul 6 23:28:09.913677 containerd[2001]: time="2025-07-06T23:28:09.913603308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-dhxm6,Uid:c9b8df54-96c8-48f9-a253-6c915fba4af6,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:28:09.961487 containerd[2001]: time="2025-07-06T23:28:09.961132657Z" level=info msg="connecting to shim d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c" address="unix:///run/containerd/s/4deb1c5ea844bcd73b15fd307afd95dcee210a619eb16d0cde4db5da3c59a5fd" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:09.983732 containerd[2001]: time="2025-07-06T23:28:09.983337889Z" level=info msg="StartContainer for \"c148676079f10b3c6f4c9071303a2ccab6346e646c2b56820949d7a7aa2fe14e\" returns successfully" Jul 6 23:28:10.041879 systemd[1]: Started cri-containerd-d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c.scope - libcontainer container d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c. Jul 6 23:28:10.195347 containerd[2001]: time="2025-07-06T23:28:10.193804906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-dhxm6,Uid:c9b8df54-96c8-48f9-a253-6c915fba4af6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c\"" Jul 6 23:28:10.203773 containerd[2001]: time="2025-07-06T23:28:10.203705950Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:28:11.558649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536578718.mount: Deactivated successfully. Jul 6 23:28:12.309807 containerd[2001]: time="2025-07-06T23:28:12.309713544Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:12.312972 containerd[2001]: time="2025-07-06T23:28:12.312854616Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:28:12.315687 containerd[2001]: time="2025-07-06T23:28:12.315574596Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:12.321031 containerd[2001]: time="2025-07-06T23:28:12.320904228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:12.322865 containerd[2001]: time="2025-07-06T23:28:12.322652952Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.11865911s" Jul 6 23:28:12.322865 containerd[2001]: time="2025-07-06T23:28:12.322716480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:28:12.335139 containerd[2001]: time="2025-07-06T23:28:12.335036580Z" level=info msg="CreateContainer within sandbox \"d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:28:12.358549 containerd[2001]: time="2025-07-06T23:28:12.358474525Z" level=info 
msg="Container bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:12.379539 containerd[2001]: time="2025-07-06T23:28:12.379434913Z" level=info msg="CreateContainer within sandbox \"d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\"" Jul 6 23:28:12.380751 containerd[2001]: time="2025-07-06T23:28:12.380483677Z" level=info msg="StartContainer for \"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\"" Jul 6 23:28:12.384555 containerd[2001]: time="2025-07-06T23:28:12.384337225Z" level=info msg="connecting to shim bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98" address="unix:///run/containerd/s/4deb1c5ea844bcd73b15fd307afd95dcee210a619eb16d0cde4db5da3c59a5fd" protocol=ttrpc version=3 Jul 6 23:28:12.447559 systemd[1]: Started cri-containerd-bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98.scope - libcontainer container bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98. Jul 6 23:28:12.512395 containerd[2001]: time="2025-07-06T23:28:12.512334097Z" level=info msg="StartContainer for \"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\" returns successfully" Jul 6 23:28:12.690130 kubelet[3305]: I0706 23:28:12.690010 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zrdx7" podStartSLOduration=3.689987066 podStartE2EDuration="3.689987066s" podCreationTimestamp="2025-07-06 23:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:10.089688909 +0000 UTC m=+6.457652169" watchObservedRunningTime="2025-07-06 23:28:12.689987066 +0000 UTC m=+9.057950290" Jul 6 23:28:14.064735 kubelet[3305]: I0706 23:28:14.064517 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-dhxm6" podStartSLOduration=2.939771087 podStartE2EDuration="5.064495369s" podCreationTimestamp="2025-07-06 23:28:09 +0000 UTC" firstStartedPulling="2025-07-06 23:28:10.200850022 +0000 UTC m=+6.568813246" lastFinishedPulling="2025-07-06 23:28:12.325574292 +0000 UTC m=+8.693537528" observedRunningTime="2025-07-06 23:28:13.122575524 +0000 UTC m=+9.490538760" watchObservedRunningTime="2025-07-06 23:28:14.064495369 +0000 UTC m=+10.432458605" Jul 6 23:28:19.501399 sudo[2346]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:19.526204 sshd[2345]: Connection closed by 139.178.89.65 port 59036 Jul 6 23:28:19.526613 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:19.535951 systemd[1]: sshd@6-172.31.19.251:22-139.178.89.65:59036.service: Deactivated successfully. Jul 6 23:28:19.551065 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:28:19.553656 systemd[1]: session-7.scope: Consumed 11.776s CPU time, 231.7M memory peak. Jul 6 23:28:19.558922 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:28:19.563733 systemd-logind[1977]: Removed session 7. Jul 6 23:28:32.599070 systemd[1]: Created slice kubepods-besteffort-pod6fb53c24_4c33_4acf_a63c_fc3e69fd4fa0.slice - libcontainer container kubepods-besteffort-pod6fb53c24_4c33_4acf_a63c_fc3e69fd4fa0.slice. 
Jul 6 23:28:32.646498 kubelet[3305]: I0706 23:28:32.646412 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0-tigera-ca-bundle\") pod \"calico-typha-8679675f4c-xsgfn\" (UID: \"6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0\") " pod="calico-system/calico-typha-8679675f4c-xsgfn" Jul 6 23:28:32.647116 kubelet[3305]: I0706 23:28:32.646513 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zv2\" (UniqueName: \"kubernetes.io/projected/6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0-kube-api-access-r4zv2\") pod \"calico-typha-8679675f4c-xsgfn\" (UID: \"6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0\") " pod="calico-system/calico-typha-8679675f4c-xsgfn" Jul 6 23:28:32.647116 kubelet[3305]: I0706 23:28:32.646557 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0-typha-certs\") pod \"calico-typha-8679675f4c-xsgfn\" (UID: \"6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0\") " pod="calico-system/calico-typha-8679675f4c-xsgfn" Jul 6 23:28:32.913738 containerd[2001]: time="2025-07-06T23:28:32.912680303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8679675f4c-xsgfn,Uid:6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:32.987892 containerd[2001]: time="2025-07-06T23:28:32.986903387Z" level=info msg="connecting to shim d90d5634e638b6e3933f8f75b5bdbcb593950da12918584444c7d66a16b9618a" address="unix:///run/containerd/s/8de477da3c45f2e45f0b8eb37ff28e58d12e14c851cf176efc31e720ffe32e4d" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:33.072557 systemd[1]: Created slice kubepods-besteffort-pode949bbb9_9b42_48f2_8f4e_6c9a69d2ca56.slice - libcontainer container kubepods-besteffort-pode949bbb9_9b42_48f2_8f4e_6c9a69d2ca56.slice. Jul 6 23:28:33.115934 systemd[1]: Started cri-containerd-d90d5634e638b6e3933f8f75b5bdbcb593950da12918584444c7d66a16b9618a.scope - libcontainer container d90d5634e638b6e3933f8f75b5bdbcb593950da12918584444c7d66a16b9618a. 
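The "Created slice" entries at 23:28:09, 23:28:32 and 23:28:33 follow a visible naming pattern: the kubepods-besteffort parent plus the pod UID with dashes mapped to underscores. A tiny sketch reproducing that pattern as inferred from these entries; pods in other QoS classes would land under a different parent slice, which is an assumption not shown in this log:

#!/usr/bin/env python3
# Sketch: derive the BestEffort slice name seen in the systemd entries above
# from a pod UID ("-" becomes "_" under systemd unit-name escaping).
def besteffort_slice(pod_uid: str) -> str:
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_slice("e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56"))
# -> kubepods-besteffort-pode949bbb9_9b42_48f2_8f4e_6c9a69d2ca56.slice, as logged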
Jul 6 23:28:33.151677 kubelet[3305]: I0706 23:28:33.151598 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-cni-net-dir\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.151677 kubelet[3305]: I0706 23:28:33.151676 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-policysync\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.151894 kubelet[3305]: I0706 23:28:33.151722 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-cni-log-dir\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.151894 kubelet[3305]: I0706 23:28:33.151772 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-cni-bin-dir\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.151894 kubelet[3305]: I0706 23:28:33.151812 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-var-run-calico\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.151894 kubelet[3305]: I0706 23:28:33.151853 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-lib-modules\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.151894 kubelet[3305]: I0706 23:28:33.151888 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-node-certs\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.152216 kubelet[3305]: I0706 23:28:33.151923 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-tigera-ca-bundle\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.152216 kubelet[3305]: I0706 23:28:33.151972 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-flexvol-driver-host\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.152216 kubelet[3305]: I0706 23:28:33.152007 3305 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-xtables-lock\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.152216 kubelet[3305]: I0706 23:28:33.152043 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgxtc\" (UniqueName: \"kubernetes.io/projected/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-kube-api-access-mgxtc\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.152216 kubelet[3305]: I0706 23:28:33.152111 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56-var-lib-calico\") pod \"calico-node-4ql6n\" (UID: \"e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56\") " pod="calico-system/calico-node-4ql6n" Jul 6 23:28:33.255430 kubelet[3305]: E0706 23:28:33.254320 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.255430 kubelet[3305]: W0706 23:28:33.254519 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.256249 kubelet[3305]: E0706 23:28:33.255919 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.257858 kubelet[3305]: E0706 23:28:33.257551 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.257858 kubelet[3305]: W0706 23:28:33.257809 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.258516 kubelet[3305]: E0706 23:28:33.258361 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.259674 kubelet[3305]: E0706 23:28:33.259518 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.260065 kubelet[3305]: W0706 23:28:33.259872 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.260376 kubelet[3305]: E0706 23:28:33.260002 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.262245 kubelet[3305]: E0706 23:28:33.262057 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.262245 kubelet[3305]: W0706 23:28:33.262127 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.262648 kubelet[3305]: E0706 23:28:33.262163 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.263624 kubelet[3305]: E0706 23:28:33.263246 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.263624 kubelet[3305]: W0706 23:28:33.263281 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.263624 kubelet[3305]: E0706 23:28:33.263315 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.265529 kubelet[3305]: E0706 23:28:33.265463 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.265752 kubelet[3305]: W0706 23:28:33.265717 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.266003 kubelet[3305]: E0706 23:28:33.265814 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.267008 kubelet[3305]: E0706 23:28:33.266714 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.267008 kubelet[3305]: W0706 23:28:33.266750 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.267008 kubelet[3305]: E0706 23:28:33.266787 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.268754 kubelet[3305]: E0706 23:28:33.268500 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.269389 kubelet[3305]: W0706 23:28:33.268946 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.269389 kubelet[3305]: E0706 23:28:33.268996 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.271345 kubelet[3305]: E0706 23:28:33.271299 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.271650 kubelet[3305]: W0706 23:28:33.271608 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.272049 kubelet[3305]: E0706 23:28:33.271788 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.272404 kubelet[3305]: E0706 23:28:33.272352 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.272599 kubelet[3305]: W0706 23:28:33.272563 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.272864 kubelet[3305]: E0706 23:28:33.272827 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.274256 kubelet[3305]: E0706 23:28:33.273670 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.274256 kubelet[3305]: W0706 23:28:33.273704 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.274256 kubelet[3305]: E0706 23:28:33.273736 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.275165 kubelet[3305]: E0706 23:28:33.275109 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.276320 kubelet[3305]: W0706 23:28:33.276268 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.276764 kubelet[3305]: E0706 23:28:33.276507 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.277473 kubelet[3305]: E0706 23:28:33.277434 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.277892 kubelet[3305]: W0706 23:28:33.277854 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.278257 kubelet[3305]: E0706 23:28:33.278013 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.278897 kubelet[3305]: E0706 23:28:33.278727 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.279101 kubelet[3305]: W0706 23:28:33.279065 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.281341 kubelet[3305]: E0706 23:28:33.281273 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.282516 kubelet[3305]: E0706 23:28:33.282163 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.282516 kubelet[3305]: W0706 23:28:33.282245 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.282516 kubelet[3305]: E0706 23:28:33.282280 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.284507 kubelet[3305]: E0706 23:28:33.284466 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.285550 kubelet[3305]: W0706 23:28:33.285268 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.285550 kubelet[3305]: E0706 23:28:33.285324 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.286137 kubelet[3305]: E0706 23:28:33.285906 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.287235 kubelet[3305]: W0706 23:28:33.286373 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.287235 kubelet[3305]: E0706 23:28:33.286426 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.289535 kubelet[3305]: E0706 23:28:33.288465 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.289535 kubelet[3305]: W0706 23:28:33.289264 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.289535 kubelet[3305]: E0706 23:28:33.289312 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.304349 kubelet[3305]: E0706 23:28:33.304284 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.304349 kubelet[3305]: W0706 23:28:33.304332 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.304578 kubelet[3305]: E0706 23:28:33.304371 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.307768 kubelet[3305]: E0706 23:28:33.307709 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.307768 kubelet[3305]: W0706 23:28:33.307752 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.307957 kubelet[3305]: E0706 23:28:33.307788 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.311586 kubelet[3305]: E0706 23:28:33.311259 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.311586 kubelet[3305]: W0706 23:28:33.311349 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.311586 kubelet[3305]: E0706 23:28:33.311386 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.313327 kubelet[3305]: E0706 23:28:33.313271 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.313327 kubelet[3305]: W0706 23:28:33.313311 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.313527 kubelet[3305]: E0706 23:28:33.313345 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.317685 kubelet[3305]: E0706 23:28:33.316424 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.317685 kubelet[3305]: W0706 23:28:33.316468 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.317685 kubelet[3305]: E0706 23:28:33.316505 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.319712 kubelet[3305]: E0706 23:28:33.319657 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.319712 kubelet[3305]: W0706 23:28:33.319697 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.319920 kubelet[3305]: E0706 23:28:33.319767 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.322174 kubelet[3305]: E0706 23:28:33.322073 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.322174 kubelet[3305]: W0706 23:28:33.322115 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.322174 kubelet[3305]: E0706 23:28:33.322152 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.325078 kubelet[3305]: E0706 23:28:33.325018 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.325078 kubelet[3305]: W0706 23:28:33.325060 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.325395 kubelet[3305]: E0706 23:28:33.325095 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.326439 kubelet[3305]: E0706 23:28:33.326380 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.326439 kubelet[3305]: W0706 23:28:33.326425 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.326654 kubelet[3305]: E0706 23:28:33.326461 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.327551 kubelet[3305]: E0706 23:28:33.327491 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.327795 kubelet[3305]: W0706 23:28:33.327747 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.327905 kubelet[3305]: E0706 23:28:33.327800 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.329581 kubelet[3305]: E0706 23:28:33.329524 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.329581 kubelet[3305]: W0706 23:28:33.329567 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.329780 kubelet[3305]: E0706 23:28:33.329602 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.331360 kubelet[3305]: E0706 23:28:33.331275 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.331360 kubelet[3305]: W0706 23:28:33.331329 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.331360 kubelet[3305]: E0706 23:28:33.331368 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.334763 kubelet[3305]: E0706 23:28:33.334316 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.334763 kubelet[3305]: W0706 23:28:33.334358 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.334763 kubelet[3305]: E0706 23:28:33.334392 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.335009 kubelet[3305]: E0706 23:28:33.334857 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.335009 kubelet[3305]: W0706 23:28:33.334881 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.335009 kubelet[3305]: E0706 23:28:33.334910 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.336881 kubelet[3305]: E0706 23:28:33.336826 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.336881 kubelet[3305]: W0706 23:28:33.336866 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.337095 kubelet[3305]: E0706 23:28:33.336900 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.339272 kubelet[3305]: E0706 23:28:33.339214 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.339272 kubelet[3305]: W0706 23:28:33.339259 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.339475 kubelet[3305]: E0706 23:28:33.339294 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.339885 kubelet[3305]: E0706 23:28:33.339831 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.339885 kubelet[3305]: W0706 23:28:33.339870 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.340071 kubelet[3305]: E0706 23:28:33.339904 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.341708 kubelet[3305]: E0706 23:28:33.341651 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.341708 kubelet[3305]: W0706 23:28:33.341693 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.341917 kubelet[3305]: E0706 23:28:33.341728 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.343991 kubelet[3305]: E0706 23:28:33.343915 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.343991 kubelet[3305]: W0706 23:28:33.343962 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.344253 kubelet[3305]: E0706 23:28:33.343999 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.345040 kubelet[3305]: E0706 23:28:33.344984 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.345040 kubelet[3305]: W0706 23:28:33.345025 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.345282 kubelet[3305]: E0706 23:28:33.345059 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.346653 kubelet[3305]: E0706 23:28:33.346601 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.346924 kubelet[3305]: W0706 23:28:33.346644 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.346924 kubelet[3305]: E0706 23:28:33.346700 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.348741 kubelet[3305]: E0706 23:28:33.348671 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.348741 kubelet[3305]: W0706 23:28:33.348723 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.349413 kubelet[3305]: E0706 23:28:33.348761 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.350147 kubelet[3305]: E0706 23:28:33.350079 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.350147 kubelet[3305]: W0706 23:28:33.350130 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.351033 kubelet[3305]: E0706 23:28:33.350166 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.352288 kubelet[3305]: E0706 23:28:33.352155 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.352288 kubelet[3305]: W0706 23:28:33.352242 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.352288 kubelet[3305]: E0706 23:28:33.352279 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.354067 kubelet[3305]: E0706 23:28:33.353623 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.354310 kubelet[3305]: W0706 23:28:33.354070 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.354310 kubelet[3305]: E0706 23:28:33.354111 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.390244 containerd[2001]: time="2025-07-06T23:28:33.389725209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4ql6n,Uid:e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:33.462413 containerd[2001]: time="2025-07-06T23:28:33.462011493Z" level=info msg="connecting to shim 8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966" address="unix:///run/containerd/s/9b4a0d28260425e9267ec285a140d6ee24ae617a142f89511d96a41dc022aefb" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:33.534713 systemd[1]: Started cri-containerd-8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966.scope - libcontainer container 8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966. Jul 6 23:28:33.575131 kubelet[3305]: E0706 23:28:33.575069 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:33.620731 kubelet[3305]: E0706 23:28:33.620538 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.621395 kubelet[3305]: W0706 23:28:33.620703 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.621395 kubelet[3305]: E0706 23:28:33.621092 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.622838 kubelet[3305]: E0706 23:28:33.622758 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.623024 kubelet[3305]: W0706 23:28:33.622827 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.623024 kubelet[3305]: E0706 23:28:33.622936 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.623894 kubelet[3305]: E0706 23:28:33.623567 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.623894 kubelet[3305]: W0706 23:28:33.623624 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.623894 kubelet[3305]: E0706 23:28:33.623654 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.625350 kubelet[3305]: E0706 23:28:33.625293 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.625765 kubelet[3305]: W0706 23:28:33.625359 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.625765 kubelet[3305]: E0706 23:28:33.625397 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.626485 kubelet[3305]: E0706 23:28:33.626020 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.626485 kubelet[3305]: W0706 23:28:33.626054 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.626485 kubelet[3305]: E0706 23:28:33.626086 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.626725 kubelet[3305]: E0706 23:28:33.626532 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.626725 kubelet[3305]: W0706 23:28:33.626556 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.626725 kubelet[3305]: E0706 23:28:33.626583 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.627348 kubelet[3305]: E0706 23:28:33.626895 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.627348 kubelet[3305]: W0706 23:28:33.626930 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.627348 kubelet[3305]: E0706 23:28:33.626959 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.627348 kubelet[3305]: E0706 23:28:33.627384 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.628507 kubelet[3305]: W0706 23:28:33.627410 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.628507 kubelet[3305]: E0706 23:28:33.627440 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.628507 kubelet[3305]: E0706 23:28:33.628347 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.628507 kubelet[3305]: W0706 23:28:33.628383 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.628507 kubelet[3305]: E0706 23:28:33.628415 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.629356 kubelet[3305]: E0706 23:28:33.628862 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.629356 kubelet[3305]: W0706 23:28:33.628885 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.629356 kubelet[3305]: E0706 23:28:33.628913 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.629356 kubelet[3305]: E0706 23:28:33.629270 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.629356 kubelet[3305]: W0706 23:28:33.629296 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.629356 kubelet[3305]: E0706 23:28:33.629321 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.630326 kubelet[3305]: E0706 23:28:33.629869 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.630326 kubelet[3305]: W0706 23:28:33.629899 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.630326 kubelet[3305]: E0706 23:28:33.629946 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.631520 kubelet[3305]: E0706 23:28:33.631460 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.631520 kubelet[3305]: W0706 23:28:33.631504 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.631739 kubelet[3305]: E0706 23:28:33.631538 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.632820 kubelet[3305]: E0706 23:28:33.631965 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.632820 kubelet[3305]: W0706 23:28:33.632000 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.632820 kubelet[3305]: E0706 23:28:33.632036 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.632820 kubelet[3305]: E0706 23:28:33.632582 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.632820 kubelet[3305]: W0706 23:28:33.632610 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.632820 kubelet[3305]: E0706 23:28:33.632641 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.634941 kubelet[3305]: E0706 23:28:33.634906 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.634941 kubelet[3305]: W0706 23:28:33.634934 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.635674 kubelet[3305]: E0706 23:28:33.634967 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.635674 kubelet[3305]: E0706 23:28:33.635522 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.635674 kubelet[3305]: W0706 23:28:33.635549 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.635674 kubelet[3305]: E0706 23:28:33.635578 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.635674 kubelet[3305]: E0706 23:28:33.635952 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.635674 kubelet[3305]: W0706 23:28:33.635975 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.635674 kubelet[3305]: E0706 23:28:33.636000 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.637534 kubelet[3305]: E0706 23:28:33.636405 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.637534 kubelet[3305]: W0706 23:28:33.636429 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.637534 kubelet[3305]: E0706 23:28:33.636457 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.637534 kubelet[3305]: E0706 23:28:33.636941 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.637534 kubelet[3305]: W0706 23:28:33.636973 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.637534 kubelet[3305]: E0706 23:28:33.637002 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.639273 kubelet[3305]: E0706 23:28:33.637580 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.639273 kubelet[3305]: W0706 23:28:33.637604 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.639273 kubelet[3305]: E0706 23:28:33.637630 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.639273 kubelet[3305]: I0706 23:28:33.637703 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94edf3c8-6c21-47d9-9a93-a87b343dac2f-registration-dir\") pod \"csi-node-driver-5nmvf\" (UID: \"94edf3c8-6c21-47d9-9a93-a87b343dac2f\") " pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:33.639273 kubelet[3305]: E0706 23:28:33.638274 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.639273 kubelet[3305]: W0706 23:28:33.638301 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.639273 kubelet[3305]: E0706 23:28:33.638333 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.639273 kubelet[3305]: E0706 23:28:33.638764 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.639273 kubelet[3305]: W0706 23:28:33.638787 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.640732 kubelet[3305]: E0706 23:28:33.638813 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.640732 kubelet[3305]: E0706 23:28:33.639241 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.640732 kubelet[3305]: W0706 23:28:33.639265 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.640732 kubelet[3305]: E0706 23:28:33.639291 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.640732 kubelet[3305]: I0706 23:28:33.639344 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5nbz\" (UniqueName: \"kubernetes.io/projected/94edf3c8-6c21-47d9-9a93-a87b343dac2f-kube-api-access-r5nbz\") pod \"csi-node-driver-5nmvf\" (UID: \"94edf3c8-6c21-47d9-9a93-a87b343dac2f\") " pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:33.640732 kubelet[3305]: E0706 23:28:33.639800 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.640732 kubelet[3305]: W0706 23:28:33.639829 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.640732 kubelet[3305]: E0706 23:28:33.639859 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.641110 kubelet[3305]: I0706 23:28:33.639921 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/94edf3c8-6c21-47d9-9a93-a87b343dac2f-varrun\") pod \"csi-node-driver-5nmvf\" (UID: \"94edf3c8-6c21-47d9-9a93-a87b343dac2f\") " pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:33.643011 kubelet[3305]: E0706 23:28:33.642342 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.643011 kubelet[3305]: W0706 23:28:33.642383 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.643011 kubelet[3305]: E0706 23:28:33.642416 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.644779 kubelet[3305]: E0706 23:28:33.644047 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.644779 kubelet[3305]: W0706 23:28:33.644082 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.646420 kubelet[3305]: E0706 23:28:33.644113 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.646748 kubelet[3305]: E0706 23:28:33.646712 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.649341 kubelet[3305]: W0706 23:28:33.647315 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.651239 kubelet[3305]: E0706 23:28:33.647362 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.651239 kubelet[3305]: I0706 23:28:33.650468 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94edf3c8-6c21-47d9-9a93-a87b343dac2f-socket-dir\") pod \"csi-node-driver-5nmvf\" (UID: \"94edf3c8-6c21-47d9-9a93-a87b343dac2f\") " pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:33.653291 kubelet[3305]: E0706 23:28:33.653161 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.653291 kubelet[3305]: W0706 23:28:33.653251 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.653291 kubelet[3305]: E0706 23:28:33.653289 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.656212 kubelet[3305]: E0706 23:28:33.656146 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.656212 kubelet[3305]: W0706 23:28:33.656207 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.656431 kubelet[3305]: E0706 23:28:33.656243 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.659017 kubelet[3305]: E0706 23:28:33.658843 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.659017 kubelet[3305]: W0706 23:28:33.658889 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.659017 kubelet[3305]: E0706 23:28:33.658925 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.660063 kubelet[3305]: I0706 23:28:33.659940 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94edf3c8-6c21-47d9-9a93-a87b343dac2f-kubelet-dir\") pod \"csi-node-driver-5nmvf\" (UID: \"94edf3c8-6c21-47d9-9a93-a87b343dac2f\") " pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:33.661224 kubelet[3305]: E0706 23:28:33.661079 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.661224 kubelet[3305]: W0706 23:28:33.661125 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.661224 kubelet[3305]: E0706 23:28:33.661162 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.664055 kubelet[3305]: E0706 23:28:33.663982 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.664055 kubelet[3305]: W0706 23:28:33.664035 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.664283 kubelet[3305]: E0706 23:28:33.664073 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.666669 kubelet[3305]: E0706 23:28:33.666608 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.666669 kubelet[3305]: W0706 23:28:33.666651 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.666878 kubelet[3305]: E0706 23:28:33.666687 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.670218 kubelet[3305]: E0706 23:28:33.670107 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.674236 kubelet[3305]: W0706 23:28:33.673165 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.674236 kubelet[3305]: E0706 23:28:33.673463 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.746055 containerd[2001]: time="2025-07-06T23:28:33.745970327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8679675f4c-xsgfn,Uid:6fb53c24-4c33-4acf-a63c-fc3e69fd4fa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d90d5634e638b6e3933f8f75b5bdbcb593950da12918584444c7d66a16b9618a\"" Jul 6 23:28:33.752824 containerd[2001]: time="2025-07-06T23:28:33.752751011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:28:33.774439 kubelet[3305]: E0706 23:28:33.774382 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.774439 kubelet[3305]: W0706 23:28:33.774430 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.774636 kubelet[3305]: E0706 23:28:33.774465 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.785705 kubelet[3305]: E0706 23:28:33.785380 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.787090 kubelet[3305]: W0706 23:28:33.785419 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.787922 kubelet[3305]: E0706 23:28:33.787044 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.795307 kubelet[3305]: E0706 23:28:33.795147 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.797335 kubelet[3305]: W0706 23:28:33.795741 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.797817 kubelet[3305]: E0706 23:28:33.797757 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[kubelet[3305] repeats the same driver-call.go / FlexVolume / plugins.go error triplet continuously between 23:28:33.81 and 23:28:33.86]
Jul 6 23:28:33.866445 kubelet[3305]: E0706 23:28:33.866402 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.867991 kubelet[3305]: W0706 23:28:33.867819 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.868353 kubelet[3305]: E0706 23:28:33.868302 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.871564 kubelet[3305]: E0706 23:28:33.871485 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.871728 containerd[2001]: time="2025-07-06T23:28:33.871623191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4ql6n,Uid:e949bbb9-9b42-48f2-8f4e-6c9a69d2ca56,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\"" Jul 6 23:28:33.871931 kubelet[3305]: W0706 23:28:33.871525 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.871931 kubelet[3305]: E0706 23:28:33.871856 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.876515 kubelet[3305]: E0706 23:28:33.875992 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.876515 kubelet[3305]: W0706 23:28:33.876025 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.876515 kubelet[3305]: E0706 23:28:33.876057 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.879435 kubelet[3305]: E0706 23:28:33.879393 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.879692 kubelet[3305]: W0706 23:28:33.879657 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.879991 kubelet[3305]: E0706 23:28:33.879872 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.881346 kubelet[3305]: E0706 23:28:33.881027 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.881346 kubelet[3305]: W0706 23:28:33.881065 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.881346 kubelet[3305]: E0706 23:28:33.881096 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:33.883385 kubelet[3305]: E0706 23:28:33.883273 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.883385 kubelet[3305]: W0706 23:28:33.883315 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.883385 kubelet[3305]: E0706 23:28:33.883347 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.886673 kubelet[3305]: E0706 23:28:33.886623 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.887722 kubelet[3305]: W0706 23:28:33.886864 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.887722 kubelet[3305]: E0706 23:28:33.886909 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.888873 kubelet[3305]: E0706 23:28:33.888831 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.889328 kubelet[3305]: W0706 23:28:33.889282 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.891226 kubelet[3305]: E0706 23:28:33.889547 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:33.930653 kubelet[3305]: E0706 23:28:33.930503 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:33.930653 kubelet[3305]: W0706 23:28:33.930545 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:33.930653 kubelet[3305]: E0706 23:28:33.930580 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:34.983613 kubelet[3305]: E0706 23:28:34.981684 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:35.082130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597167521.mount: Deactivated successfully. 
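
The repeated driver-call.go and plugins.go errors above are the kubelet's FlexVolume plugin probing: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshals whatever the process prints as JSON. Because that executable does not exist yet, the captured output is empty and the JSON decode fails with "unexpected end of JSON input"; the flexvol-driver container started later in this log from the pod2daemon-flexvol image is what normally installs the real binary. The Go stub below is purely illustrative (it is not Calico's uds driver); it only sketches the reply shape such a driver is expected to print.

// Illustrative FlexVolume driver stub (not the real nodeagent~uds/uds binary).
// The kubelet invokes the driver executable with a subcommand such as "init"
// and expects a single JSON object on stdout; an empty reply is what produces
// the "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the minimal reply shape a FlexVolume driver prints.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "init" advertises capabilities; this stub claims no attach support.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Every other call is reported as unsupported rather than left empty.
	reply(driverStatus{Status: "Not supported", Message: "stub driver"})
}

Until something that answers like this exists at that path, every probe cycle produces the same E/W/E triplet seen above.
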
Jul 6 23:28:36.650084 containerd[2001]: time="2025-07-06T23:28:36.649835509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:36.652809 containerd[2001]: time="2025-07-06T23:28:36.652692361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:28:36.656346 containerd[2001]: time="2025-07-06T23:28:36.656255293Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:36.663232 containerd[2001]: time="2025-07-06T23:28:36.662860021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:36.668420 containerd[2001]: time="2025-07-06T23:28:36.668160805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.915332754s" Jul 6 23:28:36.668420 containerd[2001]: time="2025-07-06T23:28:36.668267209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:28:36.673477 containerd[2001]: time="2025-07-06T23:28:36.673410169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:28:36.721318 containerd[2001]: time="2025-07-06T23:28:36.721238414Z" level=info msg="CreateContainer within sandbox \"d90d5634e638b6e3933f8f75b5bdbcb593950da12918584444c7d66a16b9618a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:28:36.741576 containerd[2001]: time="2025-07-06T23:28:36.740471954Z" level=info msg="Container f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:36.764175 containerd[2001]: time="2025-07-06T23:28:36.764092478Z" level=info msg="CreateContainer within sandbox \"d90d5634e638b6e3933f8f75b5bdbcb593950da12918584444c7d66a16b9618a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c\"" Jul 6 23:28:36.766804 containerd[2001]: time="2025-07-06T23:28:36.766731494Z" level=info msg="StartContainer for \"f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c\"" Jul 6 23:28:36.769746 containerd[2001]: time="2025-07-06T23:28:36.769655678Z" level=info msg="connecting to shim f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c" address="unix:///run/containerd/s/8de477da3c45f2e45f0b8eb37ff28e58d12e14c851cf176efc31e720ffe32e4d" protocol=ttrpc version=3 Jul 6 23:28:36.816524 systemd[1]: Started cri-containerd-f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c.scope - libcontainer container f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c. 
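
Each image pull is logged by containerd with its repo tag, digest, size, and wall-clock time ("... in 2.915332754s" for typha above). The Go sketch below is a throwaway log-reading aid, tuned to exactly these "Pulled image" lines as they appear in the journal text (the regular expression and its assumptions are mine, not a containerd API); fed this log on stdin it lists each image reference with its reported pull duration.

// Quick-and-dirty extractor for the containerd "Pulled image ... in <duration>"
// entries in this log. The regexp is tuned to these exact lines (note the
// escaped quotes as they appear in the journal text).
package main

import (
	"fmt"
	"io"
	"os"
	"regexp"
	"time"
)

var pulled = regexp.MustCompile(`msg="Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+s)"`)

func main() {
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	for _, m := range pulled.FindAllStringSubmatch(string(raw), -1) {
		d, err := time.ParseDuration(m[2])
		if err != nil {
			continue // duration format we did not anticipate; skip it
		}
		fmt.Printf("%-55s pulled in %v\n", m[1], d)
	}
}

On this boot log it would report the typha, pod2daemon-flexvol and cni pulls recorded here with their logged durations.
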
Jul 6 23:28:36.926609 containerd[2001]: time="2025-07-06T23:28:36.926525307Z" level=info msg="StartContainer for \"f56cb3ffd410b3da3bf45ba9a4cccb50e830adfa4a272d6482e8c9a78e3e1e7c\" returns successfully" Jul 6 23:28:36.981446 kubelet[3305]: E0706 23:28:36.981383 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:37.277333 kubelet[3305]: E0706 23:28:37.276120 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:37.277333 kubelet[3305]: W0706 23:28:37.276285 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:37.277333 kubelet[3305]: E0706 23:28:37.276323 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:37.278205 kubelet[3305]: E0706 23:28:37.278141 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:37.278451 kubelet[3305]: W0706 23:28:37.278360 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:37.278629 kubelet[3305]: E0706 23:28:37.278557 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:37.279324 kubelet[3305]: E0706 23:28:37.279280 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:37.279600 kubelet[3305]: W0706 23:28:37.279430 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:37.279820 kubelet[3305]: E0706 23:28:37.279467 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:28:37.282233 kubelet[3305]: E0706 23:28:37.281044 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:37.282233 kubelet[3305]: W0706 23:28:37.281110 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:37.282233 kubelet[3305]: E0706 23:28:37.281144 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[kubelet[3305] repeats the same driver-call.go / FlexVolume / plugins.go error triplet continuously between 23:28:37.283 and 23:28:37.418]
Jul 6 23:28:37.418273 kubelet[3305]: E0706 23:28:37.418174 3305 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:28:37.418273 kubelet[3305]: W0706 23:28:37.418314 3305 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:28:37.418273 kubelet[3305]: E0706 23:28:37.418347 3305 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:28:37.908214 containerd[2001]: time="2025-07-06T23:28:37.907451931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:37.911319 containerd[2001]: time="2025-07-06T23:28:37.911237259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:28:37.913375 containerd[2001]: time="2025-07-06T23:28:37.913301091Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:37.919224 containerd[2001]: time="2025-07-06T23:28:37.918952059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:37.921075 containerd[2001]: time="2025-07-06T23:28:37.921000927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.247526702s" Jul 6 23:28:37.921388 containerd[2001]: time="2025-07-06T23:28:37.921254416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:28:37.930559 containerd[2001]: time="2025-07-06T23:28:37.930494140Z" level=info msg="CreateContainer within sandbox \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:28:37.952215 containerd[2001]: time="2025-07-06T23:28:37.951666580Z" level=info msg="Container ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:37.962009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184363223.mount: Deactivated successfully. Jul 6 23:28:37.973633 containerd[2001]: time="2025-07-06T23:28:37.973556200Z" level=info msg="CreateContainer within sandbox \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\"" Jul 6 23:28:37.974441 containerd[2001]: time="2025-07-06T23:28:37.974374492Z" level=info msg="StartContainer for \"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\"" Jul 6 23:28:37.980442 containerd[2001]: time="2025-07-06T23:28:37.980146672Z" level=info msg="connecting to shim ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630" address="unix:///run/containerd/s/9b4a0d28260425e9267ec285a140d6ee24ae617a142f89511d96a41dc022aefb" protocol=ttrpc version=3 Jul 6 23:28:38.026485 systemd[1]: Started cri-containerd-ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630.scope - libcontainer container ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630. 
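
The flexvol-driver container created above in the calico-node sandbox (from the pod2daemon-flexvol image) is what is normally expected to populate the FlexVolume plugin directory the kubelet has been probing. A node-side spot check one might run after it exits, using the exact path from the driver-call.go errors (the check itself is an illustration, not part of Calico):

// Verifies that the FlexVolume driver binary the kubelet was probing for
// earlier now exists and is executable. The path is taken verbatim from the
// driver-call.go errors above.
package main

import (
	"fmt"
	"os"
)

const udsDriver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

func main() {
	info, err := os.Stat(udsDriver)
	if err != nil {
		fmt.Println("flexvolume driver still missing:", err)
		os.Exit(1)
	}
	if info.Mode()&0o111 == 0 {
		fmt.Println("flexvolume driver present but not executable:", info.Mode())
		os.Exit(1)
	}
	fmt.Println("flexvolume driver installed:", udsDriver, info.Mode())
}
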
Jul 6 23:28:38.105449 containerd[2001]: time="2025-07-06T23:28:38.105366516Z" level=info msg="StartContainer for \"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\" returns successfully" Jul 6 23:28:38.139471 systemd[1]: cri-containerd-ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630.scope: Deactivated successfully. Jul 6 23:28:38.143145 containerd[2001]: time="2025-07-06T23:28:38.143065873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\" id:\"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\" pid:4298 exited_at:{seconds:1751844518 nanos:142152613}" Jul 6 23:28:38.143942 containerd[2001]: time="2025-07-06T23:28:38.143858053Z" level=info msg="received exit event container_id:\"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\" id:\"ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630\" pid:4298 exited_at:{seconds:1751844518 nanos:142152613}" Jul 6 23:28:38.189914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef09a0337bd4a85b40a7c765955e7f451386b7120284854f6e6f2c9f013d2630-rootfs.mount: Deactivated successfully. Jul 6 23:28:38.211861 kubelet[3305]: I0706 23:28:38.211752 3305 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:28:38.262673 kubelet[3305]: I0706 23:28:38.262537 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8679675f4c-xsgfn" podStartSLOduration=3.341672651 podStartE2EDuration="6.262510777s" podCreationTimestamp="2025-07-06 23:28:32 +0000 UTC" firstStartedPulling="2025-07-06 23:28:33.750828755 +0000 UTC m=+30.118791979" lastFinishedPulling="2025-07-06 23:28:36.671666881 +0000 UTC m=+33.039630105" observedRunningTime="2025-07-06 23:28:37.235957896 +0000 UTC m=+33.603921144" watchObservedRunningTime="2025-07-06 23:28:38.262510777 +0000 UTC m=+34.630474025" Jul 6 23:28:38.981284 kubelet[3305]: E0706 23:28:38.981218 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:39.224258 containerd[2001]: time="2025-07-06T23:28:39.223308086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:28:40.981092 kubelet[3305]: E0706 23:28:40.980661 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:42.274902 containerd[2001]: time="2025-07-06T23:28:42.274846541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:42.276843 containerd[2001]: time="2025-07-06T23:28:42.276788537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:28:42.279253 containerd[2001]: time="2025-07-06T23:28:42.279202133Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:42.284099 
containerd[2001]: time="2025-07-06T23:28:42.283994693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:42.285314 containerd[2001]: time="2025-07-06T23:28:42.285255809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.061686135s" Jul 6 23:28:42.285437 containerd[2001]: time="2025-07-06T23:28:42.285311429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:28:42.302995 containerd[2001]: time="2025-07-06T23:28:42.302860481Z" level=info msg="CreateContainer within sandbox \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:28:42.322715 containerd[2001]: time="2025-07-06T23:28:42.322637009Z" level=info msg="Container 68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:42.329545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848745285.mount: Deactivated successfully. Jul 6 23:28:42.352854 containerd[2001]: time="2025-07-06T23:28:42.352598862Z" level=info msg="CreateContainer within sandbox \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\"" Jul 6 23:28:42.354235 containerd[2001]: time="2025-07-06T23:28:42.353761950Z" level=info msg="StartContainer for \"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\"" Jul 6 23:28:42.357800 containerd[2001]: time="2025-07-06T23:28:42.357682650Z" level=info msg="connecting to shim 68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f" address="unix:///run/containerd/s/9b4a0d28260425e9267ec285a140d6ee24ae617a142f89511d96a41dc022aefb" protocol=ttrpc version=3 Jul 6 23:28:42.400510 systemd[1]: Started cri-containerd-68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f.scope - libcontainer container 68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f. 
Jul 6 23:28:42.490656 containerd[2001]: time="2025-07-06T23:28:42.490525422Z" level=info msg="StartContainer for \"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\" returns successfully" Jul 6 23:28:42.982147 kubelet[3305]: E0706 23:28:42.980698 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:43.467812 containerd[2001]: time="2025-07-06T23:28:43.467735287Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:28:43.473497 systemd[1]: cri-containerd-68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f.scope: Deactivated successfully. Jul 6 23:28:43.474075 systemd[1]: cri-containerd-68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f.scope: Consumed 967ms CPU time, 193.7M memory peak, 165.8M written to disk. Jul 6 23:28:43.478146 containerd[2001]: time="2025-07-06T23:28:43.478055011Z" level=info msg="received exit event container_id:\"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\" id:\"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\" pid:4360 exited_at:{seconds:1751844523 nanos:476496391}" Jul 6 23:28:43.479744 containerd[2001]: time="2025-07-06T23:28:43.478872883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\" id:\"68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f\" pid:4360 exited_at:{seconds:1751844523 nanos:476496391}" Jul 6 23:28:43.512622 kubelet[3305]: I0706 23:28:43.512288 3305 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:28:43.545021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68dfbc01deaeeca09af2129baaaad42cbdca123f55675d937e8fdede114cde6f-rootfs.mount: Deactivated successfully. Jul 6 23:28:43.632610 systemd[1]: Created slice kubepods-burstable-pod9146cfa4_447c_451c_906e_da79bf4ea187.slice - libcontainer container kubepods-burstable-pod9146cfa4_447c_451c_906e_da79bf4ea187.slice. 
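The 23:28:43 reload error above fires because the WRITE event was for /etc/cni/net.d/calico-kubeconfig while no *.conf, *.conflist or *.json network config exists in that directory yet, so the CRI plugin keeps reporting NetworkReady=false and the "cni plugin not initialized" pod errors persist. A small stand-in for that directory check, standard library only; it mirrors the condition rather than containerd's actual config loader:

    // Report whether /etc/cni/net.d holds a usable CNI network config.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            ext := strings.ToLower(filepath.Ext(e.Name()))
            // The loaders accept .conf, .conflist and .json files;
            // calico-kubeconfig alone does not count.
            if ext == ".conf" || ext == ".conflist" || ext == ".json" {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/cni/net.d")
        if err != nil || !ok {
            fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
            return
        }
        fmt.Println("cni config present")
    }

Calico's install-cni container normally writes 10-calico.conflist into this directory; the RunPodSandbox attempts a second later do reach the Calico plugin, so the conflist appears to land shortly after this event, and those attempts then fail on a different precondition (see the note after the 23:28:44 failures below).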
Jul 6 23:28:43.637879 kubelet[3305]: I0706 23:28:43.637091 3305 status_manager.go:895] "Failed to get status for pod" podUID="9146cfa4-447c-451c-906e-da79bf4ea187" pod="kube-system/coredns-674b8bbfcf-k846w" err="pods \"coredns-674b8bbfcf-k846w\" is forbidden: User \"system:node:ip-172-31-19-251\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-251' and this object" Jul 6 23:28:43.640757 kubelet[3305]: E0706 23:28:43.639546 3305 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-19-251\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-251' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Jul 6 23:28:43.643499 kubelet[3305]: I0706 23:28:43.643450 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9146cfa4-447c-451c-906e-da79bf4ea187-config-volume\") pod \"coredns-674b8bbfcf-k846w\" (UID: \"9146cfa4-447c-451c-906e-da79bf4ea187\") " pod="kube-system/coredns-674b8bbfcf-k846w" Jul 6 23:28:43.644306 kubelet[3305]: I0706 23:28:43.644262 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4j6\" (UniqueName: \"kubernetes.io/projected/9146cfa4-447c-451c-906e-da79bf4ea187-kube-api-access-vs4j6\") pod \"coredns-674b8bbfcf-k846w\" (UID: \"9146cfa4-447c-451c-906e-da79bf4ea187\") " pod="kube-system/coredns-674b8bbfcf-k846w" Jul 6 23:28:43.685825 systemd[1]: Created slice kubepods-burstable-podc2a48b47_c513_49cb_b4ab_a214c8933d5d.slice - libcontainer container kubepods-burstable-podc2a48b47_c513_49cb_b4ab_a214c8933d5d.slice. 
Jul 6 23:28:43.774313 kubelet[3305]: I0706 23:28:43.745476 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b168d18-462c-4fdf-bcac-01c36756c318-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-dxdt2\" (UID: \"1b168d18-462c-4fdf-bcac-01c36756c318\") " pod="calico-system/goldmane-768f4c5c69-dxdt2" Jul 6 23:28:43.774313 kubelet[3305]: I0706 23:28:43.746817 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b168d18-462c-4fdf-bcac-01c36756c318-config\") pod \"goldmane-768f4c5c69-dxdt2\" (UID: \"1b168d18-462c-4fdf-bcac-01c36756c318\") " pod="calico-system/goldmane-768f4c5c69-dxdt2" Jul 6 23:28:43.774313 kubelet[3305]: I0706 23:28:43.746918 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndl4m\" (UniqueName: \"kubernetes.io/projected/c2a48b47-c513-49cb-b4ab-a214c8933d5d-kube-api-access-ndl4m\") pod \"coredns-674b8bbfcf-blcpd\" (UID: \"c2a48b47-c513-49cb-b4ab-a214c8933d5d\") " pod="kube-system/coredns-674b8bbfcf-blcpd" Jul 6 23:28:43.774313 kubelet[3305]: I0706 23:28:43.747019 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2a48b47-c513-49cb-b4ab-a214c8933d5d-config-volume\") pod \"coredns-674b8bbfcf-blcpd\" (UID: \"c2a48b47-c513-49cb-b4ab-a214c8933d5d\") " pod="kube-system/coredns-674b8bbfcf-blcpd" Jul 6 23:28:43.774313 kubelet[3305]: I0706 23:28:43.747107 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1b168d18-462c-4fdf-bcac-01c36756c318-goldmane-key-pair\") pod \"goldmane-768f4c5c69-dxdt2\" (UID: \"1b168d18-462c-4fdf-bcac-01c36756c318\") " pod="calico-system/goldmane-768f4c5c69-dxdt2" Jul 6 23:28:43.755824 systemd[1]: Created slice kubepods-besteffort-pod1b168d18_462c_4fdf_bcac_01c36756c318.slice - libcontainer container kubepods-besteffort-pod1b168d18_462c_4fdf_bcac_01c36756c318.slice. Jul 6 23:28:43.774831 kubelet[3305]: I0706 23:28:43.747210 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwqxl\" (UniqueName: \"kubernetes.io/projected/1b168d18-462c-4fdf-bcac-01c36756c318-kube-api-access-bwqxl\") pod \"goldmane-768f4c5c69-dxdt2\" (UID: \"1b168d18-462c-4fdf-bcac-01c36756c318\") " pod="calico-system/goldmane-768f4c5c69-dxdt2" Jul 6 23:28:43.823167 systemd[1]: Created slice kubepods-besteffort-pod078364f1_485e_48a1_bf69_3a39df214c85.slice - libcontainer container kubepods-besteffort-pod078364f1_485e_48a1_bf69_3a39df214c85.slice. 
Jul 6 23:28:43.849741 kubelet[3305]: I0706 23:28:43.847510 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rkk\" (UniqueName: \"kubernetes.io/projected/078364f1-485e-48a1-bf69-3a39df214c85-kube-api-access-47rkk\") pod \"calico-kube-controllers-cd4c5f4bd-4pgc4\" (UID: \"078364f1-485e-48a1-bf69-3a39df214c85\") " pod="calico-system/calico-kube-controllers-cd4c5f4bd-4pgc4" Jul 6 23:28:43.849741 kubelet[3305]: I0706 23:28:43.847697 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/078364f1-485e-48a1-bf69-3a39df214c85-tigera-ca-bundle\") pod \"calico-kube-controllers-cd4c5f4bd-4pgc4\" (UID: \"078364f1-485e-48a1-bf69-3a39df214c85\") " pod="calico-system/calico-kube-controllers-cd4c5f4bd-4pgc4" Jul 6 23:28:43.891688 systemd[1]: Created slice kubepods-besteffort-pod96c13034_9cd7_46a4_9bf2_d29d22006954.slice - libcontainer container kubepods-besteffort-pod96c13034_9cd7_46a4_9bf2_d29d22006954.slice. Jul 6 23:28:43.951224 kubelet[3305]: I0706 23:28:43.948812 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrc5x\" (UniqueName: \"kubernetes.io/projected/0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac-kube-api-access-nrc5x\") pod \"calico-apiserver-6bffc59bcd-cbzd2\" (UID: \"0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac\") " pod="calico-apiserver/calico-apiserver-6bffc59bcd-cbzd2" Jul 6 23:28:43.957797 kubelet[3305]: I0706 23:28:43.956398 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac-calico-apiserver-certs\") pod \"calico-apiserver-6bffc59bcd-cbzd2\" (UID: \"0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac\") " pod="calico-apiserver/calico-apiserver-6bffc59bcd-cbzd2" Jul 6 23:28:43.964714 kubelet[3305]: I0706 23:28:43.964656 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96c13034-9cd7-46a4-9bf2-d29d22006954-calico-apiserver-certs\") pod \"calico-apiserver-6bffc59bcd-pgz5t\" (UID: \"96c13034-9cd7-46a4-9bf2-d29d22006954\") " pod="calico-apiserver/calico-apiserver-6bffc59bcd-pgz5t" Jul 6 23:28:43.964916 kubelet[3305]: I0706 23:28:43.964821 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzd4\" (UniqueName: \"kubernetes.io/projected/96c13034-9cd7-46a4-9bf2-d29d22006954-kube-api-access-hmzd4\") pod \"calico-apiserver-6bffc59bcd-pgz5t\" (UID: \"96c13034-9cd7-46a4-9bf2-d29d22006954\") " pod="calico-apiserver/calico-apiserver-6bffc59bcd-pgz5t" Jul 6 23:28:43.999533 systemd[1]: Created slice kubepods-besteffort-pod0e91b1cd_dd06_4ca1_a59b_5b9bd40fbcac.slice - libcontainer container kubepods-besteffort-pod0e91b1cd_dd06_4ca1_a59b_5b9bd40fbcac.slice. Jul 6 23:28:44.061443 systemd[1]: Created slice kubepods-besteffort-pod4391c5b3_1bbe_4d20_b297_58de9373de07.slice - libcontainer container kubepods-besteffort-pod4391c5b3_1bbe_4d20_b297_58de9373de07.slice. 
Jul 6 23:28:44.066428 kubelet[3305]: I0706 23:28:44.065546 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rh25\" (UniqueName: \"kubernetes.io/projected/4391c5b3-1bbe-4d20-b297-58de9373de07-kube-api-access-8rh25\") pod \"whisker-b56789b65-44456\" (UID: \"4391c5b3-1bbe-4d20-b297-58de9373de07\") " pod="calico-system/whisker-b56789b65-44456" Jul 6 23:28:44.067387 kubelet[3305]: I0706 23:28:44.067217 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-backend-key-pair\") pod \"whisker-b56789b65-44456\" (UID: \"4391c5b3-1bbe-4d20-b297-58de9373de07\") " pod="calico-system/whisker-b56789b65-44456" Jul 6 23:28:44.067536 kubelet[3305]: I0706 23:28:44.067412 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-ca-bundle\") pod \"whisker-b56789b65-44456\" (UID: \"4391c5b3-1bbe-4d20-b297-58de9373de07\") " pod="calico-system/whisker-b56789b65-44456" Jul 6 23:28:44.084913 containerd[2001]: time="2025-07-06T23:28:44.084843894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dxdt2,Uid:1b168d18-462c-4fdf-bcac-01c36756c318,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:44.154613 containerd[2001]: time="2025-07-06T23:28:44.154368270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4c5f4bd-4pgc4,Uid:078364f1-485e-48a1-bf69-3a39df214c85,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:44.227568 containerd[2001]: time="2025-07-06T23:28:44.227511187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-pgz5t,Uid:96c13034-9cd7-46a4-9bf2-d29d22006954,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:44.282947 containerd[2001]: time="2025-07-06T23:28:44.282464755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:28:44.328725 containerd[2001]: time="2025-07-06T23:28:44.327945919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-cbzd2,Uid:0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:44.382578 containerd[2001]: time="2025-07-06T23:28:44.382463372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b56789b65-44456,Uid:4391c5b3-1bbe-4d20-b297-58de9373de07,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:44.413802 containerd[2001]: time="2025-07-06T23:28:44.413658908Z" level=error msg="Failed to destroy network for sandbox \"b6db35b00c111cef2e3b6637a0f92836dc640c6660c4dc8f585e3594b55a09de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.422640 containerd[2001]: time="2025-07-06T23:28:44.422113820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4c5f4bd-4pgc4,Uid:078364f1-485e-48a1-bf69-3a39df214c85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6db35b00c111cef2e3b6637a0f92836dc640c6660c4dc8f585e3594b55a09de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jul 6 23:28:44.424554 kubelet[3305]: E0706 23:28:44.424419 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6db35b00c111cef2e3b6637a0f92836dc640c6660c4dc8f585e3594b55a09de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.424554 kubelet[3305]: E0706 23:28:44.424552 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6db35b00c111cef2e3b6637a0f92836dc640c6660c4dc8f585e3594b55a09de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd4c5f4bd-4pgc4" Jul 6 23:28:44.425063 kubelet[3305]: E0706 23:28:44.424589 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6db35b00c111cef2e3b6637a0f92836dc640c6660c4dc8f585e3594b55a09de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd4c5f4bd-4pgc4" Jul 6 23:28:44.425063 kubelet[3305]: E0706 23:28:44.424664 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd4c5f4bd-4pgc4_calico-system(078364f1-485e-48a1-bf69-3a39df214c85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd4c5f4bd-4pgc4_calico-system(078364f1-485e-48a1-bf69-3a39df214c85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6db35b00c111cef2e3b6637a0f92836dc640c6660c4dc8f585e3594b55a09de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd4c5f4bd-4pgc4" podUID="078364f1-485e-48a1-bf69-3a39df214c85" Jul 6 23:28:44.443811 containerd[2001]: time="2025-07-06T23:28:44.443740880Z" level=error msg="Failed to destroy network for sandbox \"58de96ae3f91cd9ad1715771ab3cb86f8588c4dde9661692541280cb7a745181\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.448720 containerd[2001]: time="2025-07-06T23:28:44.448305104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dxdt2,Uid:1b168d18-462c-4fdf-bcac-01c36756c318,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58de96ae3f91cd9ad1715771ab3cb86f8588c4dde9661692541280cb7a745181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.451237 kubelet[3305]: E0706 23:28:44.450137 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"58de96ae3f91cd9ad1715771ab3cb86f8588c4dde9661692541280cb7a745181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.451237 kubelet[3305]: E0706 23:28:44.450271 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58de96ae3f91cd9ad1715771ab3cb86f8588c4dde9661692541280cb7a745181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-dxdt2" Jul 6 23:28:44.451237 kubelet[3305]: E0706 23:28:44.450309 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58de96ae3f91cd9ad1715771ab3cb86f8588c4dde9661692541280cb7a745181\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-dxdt2" Jul 6 23:28:44.451525 kubelet[3305]: E0706 23:28:44.450390 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-dxdt2_calico-system(1b168d18-462c-4fdf-bcac-01c36756c318)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-dxdt2_calico-system(1b168d18-462c-4fdf-bcac-01c36756c318)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58de96ae3f91cd9ad1715771ab3cb86f8588c4dde9661692541280cb7a745181\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-dxdt2" podUID="1b168d18-462c-4fdf-bcac-01c36756c318" Jul 6 23:28:44.535793 containerd[2001]: time="2025-07-06T23:28:44.535619276Z" level=error msg="Failed to destroy network for sandbox \"8b836882c1ed6a8619e6337ec3e4433a15784626dd4cc5fdccc41b4346ba7e1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.544105 containerd[2001]: time="2025-07-06T23:28:44.543616076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-pgz5t,Uid:96c13034-9cd7-46a4-9bf2-d29d22006954,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b836882c1ed6a8619e6337ec3e4433a15784626dd4cc5fdccc41b4346ba7e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.544421 kubelet[3305]: E0706 23:28:44.544321 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b836882c1ed6a8619e6337ec3e4433a15784626dd4cc5fdccc41b4346ba7e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.544421 kubelet[3305]: E0706 23:28:44.544407 3305 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b836882c1ed6a8619e6337ec3e4433a15784626dd4cc5fdccc41b4346ba7e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bffc59bcd-pgz5t" Jul 6 23:28:44.544591 kubelet[3305]: E0706 23:28:44.544441 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b836882c1ed6a8619e6337ec3e4433a15784626dd4cc5fdccc41b4346ba7e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bffc59bcd-pgz5t" Jul 6 23:28:44.549145 kubelet[3305]: E0706 23:28:44.548384 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bffc59bcd-pgz5t_calico-apiserver(96c13034-9cd7-46a4-9bf2-d29d22006954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bffc59bcd-pgz5t_calico-apiserver(96c13034-9cd7-46a4-9bf2-d29d22006954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b836882c1ed6a8619e6337ec3e4433a15784626dd4cc5fdccc41b4346ba7e1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bffc59bcd-pgz5t" podUID="96c13034-9cd7-46a4-9bf2-d29d22006954" Jul 6 23:28:44.607840 systemd[1]: run-netns-cni\x2dd8007194\x2d6cd9\x2dfd52\x2dd918\x2d5269b4f8db47.mount: Deactivated successfully. 
Jul 6 23:28:44.639970 containerd[2001]: time="2025-07-06T23:28:44.639846669Z" level=error msg="Failed to destroy network for sandbox \"a6887b7cc0025ce670af763705cf3fd505dfd685d29f6fba2be3be83cd931186\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.643983 containerd[2001]: time="2025-07-06T23:28:44.643349685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-cbzd2,Uid:0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6887b7cc0025ce670af763705cf3fd505dfd685d29f6fba2be3be83cd931186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.646296 kubelet[3305]: E0706 23:28:44.646151 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6887b7cc0025ce670af763705cf3fd505dfd685d29f6fba2be3be83cd931186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.646296 kubelet[3305]: E0706 23:28:44.646267 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6887b7cc0025ce670af763705cf3fd505dfd685d29f6fba2be3be83cd931186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bffc59bcd-cbzd2" Jul 6 23:28:44.646530 kubelet[3305]: E0706 23:28:44.646303 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6887b7cc0025ce670af763705cf3fd505dfd685d29f6fba2be3be83cd931186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bffc59bcd-cbzd2" Jul 6 23:28:44.646530 kubelet[3305]: E0706 23:28:44.646435 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bffc59bcd-cbzd2_calico-apiserver(0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bffc59bcd-cbzd2_calico-apiserver(0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6887b7cc0025ce670af763705cf3fd505dfd685d29f6fba2be3be83cd931186\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bffc59bcd-cbzd2" podUID="0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac" Jul 6 23:28:44.648406 containerd[2001]: time="2025-07-06T23:28:44.648231717Z" level=error msg="Failed to destroy network for sandbox \"7563b376675cc37497719e1063344499a52a1ae97bf13e2304cf6f4a8ef2faf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.653002 containerd[2001]: time="2025-07-06T23:28:44.652525641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b56789b65-44456,Uid:4391c5b3-1bbe-4d20-b297-58de9373de07,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7563b376675cc37497719e1063344499a52a1ae97bf13e2304cf6f4a8ef2faf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.653719 systemd[1]: run-netns-cni\x2dae6930bb\x2dab07\x2d36bb\x2dec1c\x2d646adc8ec526.mount: Deactivated successfully. Jul 6 23:28:44.658128 kubelet[3305]: E0706 23:28:44.657271 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7563b376675cc37497719e1063344499a52a1ae97bf13e2304cf6f4a8ef2faf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:44.658946 kubelet[3305]: E0706 23:28:44.658062 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7563b376675cc37497719e1063344499a52a1ae97bf13e2304cf6f4a8ef2faf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b56789b65-44456" Jul 6 23:28:44.658946 kubelet[3305]: E0706 23:28:44.658268 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7563b376675cc37497719e1063344499a52a1ae97bf13e2304cf6f4a8ef2faf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b56789b65-44456" Jul 6 23:28:44.660531 kubelet[3305]: E0706 23:28:44.659537 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b56789b65-44456_calico-system(4391c5b3-1bbe-4d20-b297-58de9373de07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b56789b65-44456_calico-system(4391c5b3-1bbe-4d20-b297-58de9373de07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7563b376675cc37497719e1063344499a52a1ae97bf13e2304cf6f4a8ef2faf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b56789b65-44456" podUID="4391c5b3-1bbe-4d20-b297-58de9373de07" Jul 6 23:28:44.668377 systemd[1]: run-netns-cni\x2dd177fb9e\x2dd002\x2d7d29\x2de281\x2d548c1ec4bb3e.mount: Deactivated successfully. 
Jul 6 23:28:44.746073 kubelet[3305]: E0706 23:28:44.746005 3305 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:44.746416 kubelet[3305]: E0706 23:28:44.746250 3305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9146cfa4-447c-451c-906e-da79bf4ea187-config-volume podName:9146cfa4-447c-451c-906e-da79bf4ea187 nodeName:}" failed. No retries permitted until 2025-07-06 23:28:45.246174277 +0000 UTC m=+41.614137597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9146cfa4-447c-451c-906e-da79bf4ea187-config-volume") pod "coredns-674b8bbfcf-k846w" (UID: "9146cfa4-447c-451c-906e-da79bf4ea187") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:44.849572 kubelet[3305]: E0706 23:28:44.849368 3305 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:44.849572 kubelet[3305]: E0706 23:28:44.849547 3305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2a48b47-c513-49cb-b4ab-a214c8933d5d-config-volume podName:c2a48b47-c513-49cb-b4ab-a214c8933d5d nodeName:}" failed. No retries permitted until 2025-07-06 23:28:45.34951643 +0000 UTC m=+41.717479666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c2a48b47-c513-49cb-b4ab-a214c8933d5d-config-volume") pod "coredns-674b8bbfcf-blcpd" (UID: "c2a48b47-c513-49cb-b4ab-a214c8933d5d") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:28:44.999938 systemd[1]: Created slice kubepods-besteffort-pod94edf3c8_6c21_47d9_9a93_a87b343dac2f.slice - libcontainer container kubepods-besteffort-pod94edf3c8_6c21_47d9_9a93_a87b343dac2f.slice. 
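The "No retries permitted until ... (durationBeforeRetry 500ms)" entries are kubelet's volume manager backing off after the coredns configmap cache failed to sync; the delay starts at 500ms and roughly doubles on repeated failures. A rough illustration of that retry shape using k8s.io/apimachinery's wait helpers; the parameters are illustrative, not kubelet's exact nestedpendingoperations settings:

    // Retry a mount-like step with a doubling delay that starts at 500ms.
    package main

    import (
        "log"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        attempt := 0
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond, // first durationBeforeRetry
            Factor:   2.0,                    // 0.5s, 1s, 2s, ...
            Steps:    5,
        }
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            log.Printf("attempt %d: mounting config-volume", attempt)
            // Stand-in for "failed to sync configmap cache": returning
            // (false, nil) asks for another try after the next delay.
            return attempt >= 3, nil
        })
        if err != nil {
            log.Fatalf("gave up: %v", err)
        }
        log.Printf("mounted after %d attempts", attempt)
    }

Here the backoff clears on the first retry: the mounts are re-attempted after the 500ms window and both coredns sandboxes are tried at 23:28:45.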
Jul 6 23:28:45.006045 containerd[2001]: time="2025-07-06T23:28:45.005988235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nmvf,Uid:94edf3c8-6c21-47d9-9a93-a87b343dac2f,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:45.116237 containerd[2001]: time="2025-07-06T23:28:45.116045131Z" level=error msg="Failed to destroy network for sandbox \"bf41d8ca42ae91f891b4e82d8edc22b178b4677cc5be2fe18b8b57ac7659bbf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.119162 containerd[2001]: time="2025-07-06T23:28:45.119056675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nmvf,Uid:94edf3c8-6c21-47d9-9a93-a87b343dac2f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf41d8ca42ae91f891b4e82d8edc22b178b4677cc5be2fe18b8b57ac7659bbf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.119693 kubelet[3305]: E0706 23:28:45.119608 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf41d8ca42ae91f891b4e82d8edc22b178b4677cc5be2fe18b8b57ac7659bbf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.120513 kubelet[3305]: E0706 23:28:45.119714 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf41d8ca42ae91f891b4e82d8edc22b178b4677cc5be2fe18b8b57ac7659bbf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:45.120513 kubelet[3305]: E0706 23:28:45.119767 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf41d8ca42ae91f891b4e82d8edc22b178b4677cc5be2fe18b8b57ac7659bbf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5nmvf" Jul 6 23:28:45.120981 kubelet[3305]: E0706 23:28:45.119885 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5nmvf_calico-system(94edf3c8-6c21-47d9-9a93-a87b343dac2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5nmvf_calico-system(94edf3c8-6c21-47d9-9a93-a87b343dac2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf41d8ca42ae91f891b4e82d8edc22b178b4677cc5be2fe18b8b57ac7659bbf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5nmvf" podUID="94edf3c8-6c21-47d9-9a93-a87b343dac2f" Jul 6 23:28:45.458963 containerd[2001]: time="2025-07-06T23:28:45.458489157Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-k846w,Uid:9146cfa4-447c-451c-906e-da79bf4ea187,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:45.524100 containerd[2001]: time="2025-07-06T23:28:45.524015361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-blcpd,Uid:c2a48b47-c513-49cb-b4ab-a214c8933d5d,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:45.547647 systemd[1]: run-netns-cni\x2d1b9904ef\x2d6d2c\x2d9d25\x2d4dcf\x2dd4d8ac2c8591.mount: Deactivated successfully. Jul 6 23:28:45.664435 containerd[2001]: time="2025-07-06T23:28:45.664367146Z" level=error msg="Failed to destroy network for sandbox \"f89d60dcbdd42930d99d3ea071d9a48b8ac838e301de382e207936dcc54b728a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.669711 containerd[2001]: time="2025-07-06T23:28:45.669560674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k846w,Uid:9146cfa4-447c-451c-906e-da79bf4ea187,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d60dcbdd42930d99d3ea071d9a48b8ac838e301de382e207936dcc54b728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.671134 kubelet[3305]: E0706 23:28:45.670773 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d60dcbdd42930d99d3ea071d9a48b8ac838e301de382e207936dcc54b728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.671134 kubelet[3305]: E0706 23:28:45.671069 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d60dcbdd42930d99d3ea071d9a48b8ac838e301de382e207936dcc54b728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k846w" Jul 6 23:28:45.673072 kubelet[3305]: E0706 23:28:45.671701 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89d60dcbdd42930d99d3ea071d9a48b8ac838e301de382e207936dcc54b728a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k846w" Jul 6 23:28:45.675452 kubelet[3305]: E0706 23:28:45.673372 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k846w_kube-system(9146cfa4-447c-451c-906e-da79bf4ea187)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k846w_kube-system(9146cfa4-447c-451c-906e-da79bf4ea187)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f89d60dcbdd42930d99d3ea071d9a48b8ac838e301de382e207936dcc54b728a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k846w" podUID="9146cfa4-447c-451c-906e-da79bf4ea187" Jul 6 23:28:45.674837 systemd[1]: run-netns-cni\x2da340e242\x2db487\x2ddfe6\x2d17b0\x2d56d35edbcbfe.mount: Deactivated successfully. Jul 6 23:28:45.731304 containerd[2001]: time="2025-07-06T23:28:45.731094214Z" level=error msg="Failed to destroy network for sandbox \"9a32d1c054dc74cba8369d31161ae1ce6166027bbd7814da9cc9417914dbe973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.736437 containerd[2001]: time="2025-07-06T23:28:45.736363078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-blcpd,Uid:c2a48b47-c513-49cb-b4ab-a214c8933d5d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a32d1c054dc74cba8369d31161ae1ce6166027bbd7814da9cc9417914dbe973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.740322 kubelet[3305]: E0706 23:28:45.736913 3305 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a32d1c054dc74cba8369d31161ae1ce6166027bbd7814da9cc9417914dbe973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:28:45.737896 systemd[1]: run-netns-cni\x2d905e6ca1\x2d9456\x2d9d25\x2d74b9\x2d250bf164d5ec.mount: Deactivated successfully. 
Jul 6 23:28:45.743634 kubelet[3305]: E0706 23:28:45.740723 3305 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a32d1c054dc74cba8369d31161ae1ce6166027bbd7814da9cc9417914dbe973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-blcpd" Jul 6 23:28:45.743634 kubelet[3305]: E0706 23:28:45.740782 3305 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a32d1c054dc74cba8369d31161ae1ce6166027bbd7814da9cc9417914dbe973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-blcpd" Jul 6 23:28:45.743634 kubelet[3305]: E0706 23:28:45.740963 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-blcpd_kube-system(c2a48b47-c513-49cb-b4ab-a214c8933d5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-blcpd_kube-system(c2a48b47-c513-49cb-b4ab-a214c8933d5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a32d1c054dc74cba8369d31161ae1ce6166027bbd7814da9cc9417914dbe973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-blcpd" podUID="c2a48b47-c513-49cb-b4ab-a214c8933d5d" Jul 6 23:28:51.327493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820390469.mount: Deactivated successfully. 
Jul 6 23:28:51.403246 containerd[2001]: time="2025-07-06T23:28:51.402148574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:51.406444 containerd[2001]: time="2025-07-06T23:28:51.406367222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:28:51.409157 containerd[2001]: time="2025-07-06T23:28:51.409083626Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:51.414370 containerd[2001]: time="2025-07-06T23:28:51.414306003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:51.416122 containerd[2001]: time="2025-07-06T23:28:51.415818363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.133275152s" Jul 6 23:28:51.416122 containerd[2001]: time="2025-07-06T23:28:51.415891779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:28:51.464218 containerd[2001]: time="2025-07-06T23:28:51.464137935Z" level=info msg="CreateContainer within sandbox \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:28:51.496575 containerd[2001]: time="2025-07-06T23:28:51.494163627Z" level=info msg="Container 1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:51.519304 containerd[2001]: time="2025-07-06T23:28:51.519162879Z" level=info msg="CreateContainer within sandbox \"8d65d3d864df21b830cdcaaff5134d8f2d42a4f431bdb78bfc1b013e4198d966\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\"" Jul 6 23:28:51.520902 containerd[2001]: time="2025-07-06T23:28:51.520268547Z" level=info msg="StartContainer for \"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\"" Jul 6 23:28:51.524252 containerd[2001]: time="2025-07-06T23:28:51.524157279Z" level=info msg="connecting to shim 1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23" address="unix:///run/containerd/s/9b4a0d28260425e9267ec285a140d6ee24ae617a142f89511d96a41dc022aefb" protocol=ttrpc version=3 Jul 6 23:28:51.606697 systemd[1]: Started cri-containerd-1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23.scope - libcontainer container 1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23. Jul 6 23:28:51.714115 containerd[2001]: time="2025-07-06T23:28:51.713945452Z" level=info msg="StartContainer for \"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\" returns successfully" Jul 6 23:28:51.995049 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:28:51.996095 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 6 23:28:52.348548 kubelet[3305]: I0706 23:28:52.348383 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rh25\" (UniqueName: \"kubernetes.io/projected/4391c5b3-1bbe-4d20-b297-58de9373de07-kube-api-access-8rh25\") pod \"4391c5b3-1bbe-4d20-b297-58de9373de07\" (UID: \"4391c5b3-1bbe-4d20-b297-58de9373de07\") " Jul 6 23:28:52.350857 kubelet[3305]: I0706 23:28:52.350338 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-ca-bundle\") pod \"4391c5b3-1bbe-4d20-b297-58de9373de07\" (UID: \"4391c5b3-1bbe-4d20-b297-58de9373de07\") " Jul 6 23:28:52.350857 kubelet[3305]: I0706 23:28:52.350421 3305 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-backend-key-pair\") pod \"4391c5b3-1bbe-4d20-b297-58de9373de07\" (UID: \"4391c5b3-1bbe-4d20-b297-58de9373de07\") " Jul 6 23:28:52.353105 kubelet[3305]: I0706 23:28:52.351654 3305 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4391c5b3-1bbe-4d20-b297-58de9373de07" (UID: "4391c5b3-1bbe-4d20-b297-58de9373de07"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:28:52.367230 kubelet[3305]: I0706 23:28:52.366547 3305 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4391c5b3-1bbe-4d20-b297-58de9373de07-kube-api-access-8rh25" (OuterVolumeSpecName: "kube-api-access-8rh25") pod "4391c5b3-1bbe-4d20-b297-58de9373de07" (UID: "4391c5b3-1bbe-4d20-b297-58de9373de07"). InnerVolumeSpecName "kube-api-access-8rh25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:28:52.368918 systemd[1]: var-lib-kubelet-pods-4391c5b3\x2d1bbe\x2d4d20\x2db297\x2d58de9373de07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8rh25.mount: Deactivated successfully. Jul 6 23:28:52.372229 kubelet[3305]: I0706 23:28:52.370519 3305 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4391c5b3-1bbe-4d20-b297-58de9373de07" (UID: "4391c5b3-1bbe-4d20-b297-58de9373de07"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:28:52.379031 systemd[1]: var-lib-kubelet-pods-4391c5b3\x2d1bbe\x2d4d20\x2db297\x2d58de9373de07-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
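The mount unit names systemd logs here, such as var-lib-kubelet-pods-4391c5b3\x2d...\x7eprojected-kube\x2dapi\x2daccess\x2d8rh25.mount, are the kubelet volume paths run through systemd's path escaping: "/" becomes "-" and bytes outside roughly [A-Za-z0-9:_.] become \xNN, which is where the \x2d (for "-") and \x7e (for "~") come from. A rough sketch of that escaping, ignoring edge cases such as a leading dot or an empty path (compare systemd-escape --path):

    // Approximate systemd path escaping for mount unit names.
    package main

    import (
        "fmt"
        "strings"
    )

    func systemdEscapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.', c == ':':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e
            }
        }
        return b.String()
    }

    func main() {
        path := "/var/lib/kubelet/pods/4391c5b3-1bbe-4d20-b297-58de9373de07" +
            "/volumes/kubernetes.io~projected/kube-api-access-8rh25"
        fmt.Println(systemdEscapePath(path) + ".mount")
        // Prints the unit name that appears in the journal entry above.
    }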
Jul 6 23:28:52.390695 kubelet[3305]: I0706 23:28:52.390502 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4ql6n" podStartSLOduration=2.8486810030000003 podStartE2EDuration="20.389866515s" podCreationTimestamp="2025-07-06 23:28:32 +0000 UTC" firstStartedPulling="2025-07-06 23:28:33.877486787 +0000 UTC m=+30.245450023" lastFinishedPulling="2025-07-06 23:28:51.418672299 +0000 UTC m=+47.786635535" observedRunningTime="2025-07-06 23:28:52.388848663 +0000 UTC m=+48.756811911" watchObservedRunningTime="2025-07-06 23:28:52.389866515 +0000 UTC m=+48.758428563" Jul 6 23:28:52.451721 kubelet[3305]: I0706 23:28:52.451369 3305 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8rh25\" (UniqueName: \"kubernetes.io/projected/4391c5b3-1bbe-4d20-b297-58de9373de07-kube-api-access-8rh25\") on node \"ip-172-31-19-251\" DevicePath \"\"" Jul 6 23:28:52.451721 kubelet[3305]: I0706 23:28:52.451472 3305 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-ca-bundle\") on node \"ip-172-31-19-251\" DevicePath \"\"" Jul 6 23:28:52.451721 kubelet[3305]: I0706 23:28:52.451603 3305 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4391c5b3-1bbe-4d20-b297-58de9373de07-whisker-backend-key-pair\") on node \"ip-172-31-19-251\" DevicePath \"\"" Jul 6 23:28:52.639563 systemd[1]: Removed slice kubepods-besteffort-pod4391c5b3_1bbe_4d20_b297_58de9373de07.slice - libcontainer container kubepods-besteffort-pod4391c5b3_1bbe_4d20_b297_58de9373de07.slice. Jul 6 23:28:52.784389 systemd[1]: Created slice kubepods-besteffort-pod0a9ac80f_53ca_40e1_b232_d12178d09965.slice - libcontainer container kubepods-besteffort-pod0a9ac80f_53ca_40e1_b232_d12178d09965.slice. 
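The calico-node-4ql6n startup figures above are internally consistent: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp (23:28:52.389866515 minus 23:28:32 gives 20.389866515s), and podStartSLOduration is that span minus the image pull window, lastFinishedPulling minus firstStartedPulling = 17.541185512s, leaving 2.848681003s. A small check with the instants hard-coded from the log:

    // Recompute calico-node-4ql6n's startup durations from the logged instants.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-07-06 23:28:32 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-07-06 23:28:33.877486787 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-07-06 23:28:51.418672299 +0000 UTC")  // lastFinishedPulling
        observed := mustParse("2025-07-06 23:28:52.389866515 +0000 UTC")  // watchObservedRunningTime

        e2e := observed.Sub(created)         // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // image pull window excluded

        fmt.Println("E2E:", e2e) // 20.389866515s
        fmt.Println("SLO:", slo) // 2.848681003s
    }

The same arithmetic reproduces the calico-typha figures logged at 23:28:38 (6.262510777s end to end, 3.341672651s once its 2.920838126s pull is excluded).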
Jul 6 23:28:52.854797 kubelet[3305]: I0706 23:28:52.854662 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a9ac80f-53ca-40e1-b232-d12178d09965-whisker-ca-bundle\") pod \"whisker-6dc9bbdfff-29tsm\" (UID: \"0a9ac80f-53ca-40e1-b232-d12178d09965\") " pod="calico-system/whisker-6dc9bbdfff-29tsm" Jul 6 23:28:52.854797 kubelet[3305]: I0706 23:28:52.854755 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a9ac80f-53ca-40e1-b232-d12178d09965-whisker-backend-key-pair\") pod \"whisker-6dc9bbdfff-29tsm\" (UID: \"0a9ac80f-53ca-40e1-b232-d12178d09965\") " pod="calico-system/whisker-6dc9bbdfff-29tsm" Jul 6 23:28:52.854797 kubelet[3305]: I0706 23:28:52.854806 3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpjd\" (UniqueName: \"kubernetes.io/projected/0a9ac80f-53ca-40e1-b232-d12178d09965-kube-api-access-rxpjd\") pod \"whisker-6dc9bbdfff-29tsm\" (UID: \"0a9ac80f-53ca-40e1-b232-d12178d09965\") " pod="calico-system/whisker-6dc9bbdfff-29tsm" Jul 6 23:28:53.093226 containerd[2001]: time="2025-07-06T23:28:53.093129771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc9bbdfff-29tsm,Uid:0a9ac80f-53ca-40e1-b232-d12178d09965,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:53.464454 systemd-networkd[1805]: cali62333254fd8: Link UP Jul 6 23:28:53.464866 systemd-networkd[1805]: cali62333254fd8: Gained carrier Jul 6 23:28:53.467864 (udev-worker)[4648]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:28:53.507960 containerd[2001]: 2025-07-06 23:28:53.143 [INFO][4677] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:53.507960 containerd[2001]: 2025-07-06 23:28:53.240 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0 whisker-6dc9bbdfff- calico-system 0a9ac80f-53ca-40e1-b232-d12178d09965 942 0 2025-07-06 23:28:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dc9bbdfff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-251 whisker-6dc9bbdfff-29tsm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali62333254fd8 [] [] }} ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-" Jul 6 23:28:53.507960 containerd[2001]: 2025-07-06 23:28:53.240 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.507960 containerd[2001]: 2025-07-06 23:28:53.343 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" HandleID="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Workload="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.508397 containerd[2001]: 
2025-07-06 23:28:53.343 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" HandleID="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Workload="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000320730), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-251", "pod":"whisker-6dc9bbdfff-29tsm", "timestamp":"2025-07-06 23:28:53.343383232 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.343 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.343 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.344 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.366 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" host="ip-172-31-19-251" Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.382 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.392 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.396 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:53.508397 containerd[2001]: 2025-07-06 23:28:53.400 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.400 [INFO][4688] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" host="ip-172-31-19-251" Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.403 [INFO][4688] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0 Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.415 [INFO][4688] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" host="ip-172-31-19-251" Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.434 [INFO][4688] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" host="ip-172-31-19-251" Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.434 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" host="ip-172-31-19-251" Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.434 [INFO][4688] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Jul 6 23:28:53.508842 containerd[2001]: 2025-07-06 23:28:53.434 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" HandleID="k8s-pod-network.764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Workload="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.509235 containerd[2001]: 2025-07-06 23:28:53.445 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0", GenerateName:"whisker-6dc9bbdfff-", Namespace:"calico-system", SelfLink:"", UID:"0a9ac80f-53ca-40e1-b232-d12178d09965", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dc9bbdfff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"whisker-6dc9bbdfff-29tsm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62333254fd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:53.509235 containerd[2001]: 2025-07-06 23:28:53.446 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.509466 containerd[2001]: 2025-07-06 23:28:53.446 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62333254fd8 ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.509466 containerd[2001]: 2025-07-06 23:28:53.466 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.509608 containerd[2001]: 2025-07-06 23:28:53.470 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" 
WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0", GenerateName:"whisker-6dc9bbdfff-", Namespace:"calico-system", SelfLink:"", UID:"0a9ac80f-53ca-40e1-b232-d12178d09965", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dc9bbdfff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0", Pod:"whisker-6dc9bbdfff-29tsm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62333254fd8", MAC:"7e:c0:d0:d2:b8:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:53.509769 containerd[2001]: 2025-07-06 23:28:53.499 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" Namespace="calico-system" Pod="whisker-6dc9bbdfff-29tsm" WorkloadEndpoint="ip--172--31--19--251-k8s-whisker--6dc9bbdfff--29tsm-eth0" Jul 6 23:28:53.597304 containerd[2001]: time="2025-07-06T23:28:53.596834069Z" level=info msg="connecting to shim 764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0" address="unix:///run/containerd/s/5c42b6ee310866be53a859c6c1de05a7111c1c54adb4def294bbbeacdc094433" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:53.635223 containerd[2001]: time="2025-07-06T23:28:53.634952190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\" id:\"e2786e83389e8b79f4e15dcf67b72708af699b088d4b35c855c73cc89c82270c\" pid:4707 exit_status:1 exited_at:{seconds:1751844533 nanos:634462782}" Jul 6 23:28:53.678537 systemd[1]: Started cri-containerd-764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0.scope - libcontainer container 764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0. 
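The IPAM trace above (trying the affinity for 192.168.88.128/26, loading the block, then "Successfully claimed IPs: [192.168.88.129/26]") follows the usual pattern of handing out the first free address in the node's affine block. A self-contained Go sketch of that selection step, skipping the block's first address to match the .129 result seen here; this is an illustration under that assumption, not the libcalico-go IPAM implementation:

// Model of picking the next free address from the node's affine block.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block that has not been handed out yet,
// starting just after the block's first address.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log
	allocated := map[netip.Addr]bool{}

	// Successive CNI ADDs on this node claim .129, .130, .131, ... as in the log.
	for i := 0; i < 3; i++ {
		ip, ok := nextFree(block, allocated)
		if !ok {
			panic("block exhausted")
		}
		allocated[ip] = true
		fmt.Println("assigned", ip)
	}
}

With an empty allocation map the first result is 192.168.88.129, the address recorded for whisker-6dc9bbdfff-29tsm above; the later sandboxes in this log continue at .130 through .133.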
Jul 6 23:28:53.770120 containerd[2001]: time="2025-07-06T23:28:53.769903650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc9bbdfff-29tsm,Uid:0a9ac80f-53ca-40e1-b232-d12178d09965,Namespace:calico-system,Attempt:0,} returns sandbox id \"764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0\"" Jul 6 23:28:53.775843 containerd[2001]: time="2025-07-06T23:28:53.775766454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:28:53.992046 kubelet[3305]: I0706 23:28:53.991898 3305 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4391c5b3-1bbe-4d20-b297-58de9373de07" path="/var/lib/kubelet/pods/4391c5b3-1bbe-4d20-b297-58de9373de07/volumes" Jul 6 23:28:54.720940 containerd[2001]: time="2025-07-06T23:28:54.720756511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\" id:\"d740bad25eebac383971a0bb08b2ad88720f10ababbc83eb3e04c324547b03b5\" pid:4869 exit_status:1 exited_at:{seconds:1751844534 nanos:719989243}" Jul 6 23:28:55.328773 systemd-networkd[1805]: cali62333254fd8: Gained IPv6LL Jul 6 23:28:55.352253 containerd[2001]: time="2025-07-06T23:28:55.351701130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:55.354584 containerd[2001]: time="2025-07-06T23:28:55.354505590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:28:55.357229 containerd[2001]: time="2025-07-06T23:28:55.356857914Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:55.362819 containerd[2001]: time="2025-07-06T23:28:55.362553462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:55.367393 containerd[2001]: time="2025-07-06T23:28:55.366791634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.590928868s" Jul 6 23:28:55.367393 containerd[2001]: time="2025-07-06T23:28:55.366871482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:28:55.384505 containerd[2001]: time="2025-07-06T23:28:55.384442218Z" level=info msg="CreateContainer within sandbox \"764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:28:55.407220 containerd[2001]: time="2025-07-06T23:28:55.404610366Z" level=info msg="Container 745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:55.427943 containerd[2001]: time="2025-07-06T23:28:55.427864218Z" level=info msg="CreateContainer within sandbox \"764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9\"" Jul 6 23:28:55.430956 containerd[2001]: time="2025-07-06T23:28:55.430814418Z" level=info msg="StartContainer for \"745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9\"" Jul 6 23:28:55.439649 containerd[2001]: time="2025-07-06T23:28:55.439568323Z" level=info msg="connecting to shim 745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9" address="unix:///run/containerd/s/5c42b6ee310866be53a859c6c1de05a7111c1c54adb4def294bbbeacdc094433" protocol=ttrpc version=3 Jul 6 23:28:55.507537 systemd[1]: Started cri-containerd-745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9.scope - libcontainer container 745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9. Jul 6 23:28:55.645548 containerd[2001]: time="2025-07-06T23:28:55.644349260Z" level=info msg="StartContainer for \"745789bd68cd788603f5adbb16014dbdc0cf1a2aff14bd1085a248169d40eee9\" returns successfully" Jul 6 23:28:55.647136 containerd[2001]: time="2025-07-06T23:28:55.647066204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:28:55.994066 containerd[2001]: time="2025-07-06T23:28:55.993980145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k846w,Uid:9146cfa4-447c-451c-906e-da79bf4ea187,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:56.316768 systemd-networkd[1805]: cali62b49b89571: Link UP Jul 6 23:28:56.317153 systemd-networkd[1805]: cali62b49b89571: Gained carrier Jul 6 23:28:56.367738 containerd[2001]: 2025-07-06 23:28:56.148 [INFO][4944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:56.367738 containerd[2001]: 2025-07-06 23:28:56.179 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0 coredns-674b8bbfcf- kube-system 9146cfa4-447c-451c-906e-da79bf4ea187 869 0 2025-07-06 23:28:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-251 coredns-674b8bbfcf-k846w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62b49b89571 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-" Jul 6 23:28:56.367738 containerd[2001]: 2025-07-06 23:28:56.179 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.367738 containerd[2001]: 2025-07-06 23:28:56.230 [INFO][4957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" HandleID="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Workload="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.230 [INFO][4957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" 
HandleID="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Workload="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1820), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-251", "pod":"coredns-674b8bbfcf-k846w", "timestamp":"2025-07-06 23:28:56.230233878 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.230 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.230 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.230 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.247 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" host="ip-172-31-19-251" Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.259 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.269 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.273 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:56.368648 containerd[2001]: 2025-07-06 23:28:56.279 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.279 [INFO][4957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" host="ip-172-31-19-251" Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.283 [INFO][4957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.291 [INFO][4957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" host="ip-172-31-19-251" Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.303 [INFO][4957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" host="ip-172-31-19-251" Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.303 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" host="ip-172-31-19-251" Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.303 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:28:56.369358 containerd[2001]: 2025-07-06 23:28:56.304 [INFO][4957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" HandleID="k8s-pod-network.48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Workload="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.369700 containerd[2001]: 2025-07-06 23:28:56.308 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9146cfa4-447c-451c-906e-da79bf4ea187", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"coredns-674b8bbfcf-k846w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62b49b89571", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:56.369700 containerd[2001]: 2025-07-06 23:28:56.310 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.369700 containerd[2001]: 2025-07-06 23:28:56.310 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62b49b89571 ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.369700 containerd[2001]: 2025-07-06 23:28:56.315 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" 
WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.369700 containerd[2001]: 2025-07-06 23:28:56.317 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9146cfa4-447c-451c-906e-da79bf4ea187", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c", Pod:"coredns-674b8bbfcf-k846w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62b49b89571", MAC:"be:c0:52:ba:ee:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:56.369700 containerd[2001]: 2025-07-06 23:28:56.352 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k846w" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--k846w-eth0" Jul 6 23:28:56.413953 containerd[2001]: time="2025-07-06T23:28:56.413814079Z" level=info msg="connecting to shim 48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c" address="unix:///run/containerd/s/3580eaa15b6b61248eb2881f5bb98c90d26b0ce1792be4271ad12f1ddce4a6fe" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:56.469521 systemd[1]: Started cri-containerd-48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c.scope - libcontainer container 48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c. 
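The coredns WorkloadEndpoint recorded above carries its ports in hex (Port:0x35 and Port:0x23c1). A one-line check that these are the expected CoreDNS ports:

package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53   -> the "dns" (UDP) and "dns-tcp" (TCP) ports
	fmt.Println(0x23c1) // 9153 -> the "metrics" (TCP) port
}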
Jul 6 23:28:56.546595 containerd[2001]: time="2025-07-06T23:28:56.546524384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k846w,Uid:9146cfa4-447c-451c-906e-da79bf4ea187,Namespace:kube-system,Attempt:0,} returns sandbox id \"48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c\"" Jul 6 23:28:56.561966 containerd[2001]: time="2025-07-06T23:28:56.561354332Z" level=info msg="CreateContainer within sandbox \"48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:28:56.614870 containerd[2001]: time="2025-07-06T23:28:56.614743148Z" level=info msg="Container 980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:56.632246 containerd[2001]: time="2025-07-06T23:28:56.632089940Z" level=info msg="CreateContainer within sandbox \"48c8c47c1427f5bc72ee151b758f42ce2605fb60ef0cf62081a8e7228e35ab4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b\"" Jul 6 23:28:56.636209 containerd[2001]: time="2025-07-06T23:28:56.636109004Z" level=info msg="StartContainer for \"980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b\"" Jul 6 23:28:56.640896 containerd[2001]: time="2025-07-06T23:28:56.640537832Z" level=info msg="connecting to shim 980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b" address="unix:///run/containerd/s/3580eaa15b6b61248eb2881f5bb98c90d26b0ce1792be4271ad12f1ddce4a6fe" protocol=ttrpc version=3 Jul 6 23:28:56.690808 systemd[1]: Started cri-containerd-980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b.scope - libcontainer container 980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b. Jul 6 23:28:56.857662 containerd[2001]: time="2025-07-06T23:28:56.857492698Z" level=info msg="StartContainer for \"980ef74f5fcb0e8a7e57344e6570f6a7db7b139f140b325251647b8ef696e53b\" returns successfully" Jul 6 23:28:56.988767 containerd[2001]: time="2025-07-06T23:28:56.987719038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nmvf,Uid:94edf3c8-6c21-47d9-9a93-a87b343dac2f,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:57.485932 systemd[1]: Started sshd@7-172.31.19.251:22-139.178.89.65:53054.service - OpenSSH per-connection server daemon (139.178.89.65:53054). 
Jul 6 23:28:57.515230 kubelet[3305]: I0706 23:28:57.514634 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k846w" podStartSLOduration=48.514608717 podStartE2EDuration="48.514608717s" podCreationTimestamp="2025-07-06 23:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:57.443915588 +0000 UTC m=+53.811878812" watchObservedRunningTime="2025-07-06 23:28:57.514608717 +0000 UTC m=+53.882571953" Jul 6 23:28:57.632319 systemd-networkd[1805]: cali8d731bd5a89: Link UP Jul 6 23:28:57.634517 systemd-networkd[1805]: cali8d731bd5a89: Gained carrier Jul 6 23:28:57.696404 systemd-networkd[1805]: cali62b49b89571: Gained IPv6LL Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.110 [INFO][5056] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.147 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0 csi-node-driver- calico-system 94edf3c8-6c21-47d9-9a93-a87b343dac2f 722 0 2025-07-06 23:28:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-251 csi-node-driver-5nmvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8d731bd5a89 [] [] }} ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.147 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.306 [INFO][5077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" HandleID="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Workload="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.309 [INFO][5077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" HandleID="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Workload="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d550), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-251", "pod":"csi-node-driver-5nmvf", "timestamp":"2025-07-06 23:28:57.306754292 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.309 [INFO][5077] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.310 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.310 [INFO][5077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.380 [INFO][5077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.469 [INFO][5077] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.510 [INFO][5077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.535 [INFO][5077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.544 [INFO][5077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.546 [INFO][5077] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.554 [INFO][5077] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02 Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.570 [INFO][5077] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.600 [INFO][5077] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.602 [INFO][5077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" host="ip-172-31-19-251" Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.602 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:28:57.708980 containerd[2001]: 2025-07-06 23:28:57.602 [INFO][5077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" HandleID="k8s-pod-network.11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Workload="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.712809 containerd[2001]: 2025-07-06 23:28:57.618 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94edf3c8-6c21-47d9-9a93-a87b343dac2f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"csi-node-driver-5nmvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d731bd5a89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:57.712809 containerd[2001]: 2025-07-06 23:28:57.619 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.712809 containerd[2001]: 2025-07-06 23:28:57.619 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d731bd5a89 ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.712809 containerd[2001]: 2025-07-06 23:28:57.638 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.712809 containerd[2001]: 2025-07-06 23:28:57.642 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" 
Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94edf3c8-6c21-47d9-9a93-a87b343dac2f", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02", Pod:"csi-node-driver-5nmvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d731bd5a89", MAC:"d6:8b:f1:6d:f9:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:57.712809 containerd[2001]: 2025-07-06 23:28:57.701 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" Namespace="calico-system" Pod="csi-node-driver-5nmvf" WorkloadEndpoint="ip--172--31--19--251-k8s-csi--node--driver--5nmvf-eth0" Jul 6 23:28:57.775519 sshd[5088]: Accepted publickey for core from 139.178.89.65 port 53054 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:28:57.787844 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:57.808219 systemd-logind[1977]: New session 8 of user core. Jul 6 23:28:57.816561 systemd[1]: Started session-8.scope - Session 8 of User core. 
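The sshd entry above identifies the client key only by "RSA SHA256:XIfY…". Fingerprints in this form are the unpadded base64 encoding of a SHA-256 digest over the wire-format public key blob. A small Go illustration with a placeholder blob, since the actual key material is not in the log:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint renders a key blob the way OpenSSH logs it: "SHA256:" plus
// the unpadded base64 of the SHA-256 digest of the blob.
func fingerprint(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// Placeholder input; a real fingerprint is computed over the SSH wire-format public key.
	fmt.Println(fingerprint([]byte("example-public-key-blob")))
}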
Jul 6 23:28:57.836451 containerd[2001]: time="2025-07-06T23:28:57.836361274Z" level=info msg="connecting to shim 11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02" address="unix:///run/containerd/s/0f71ceccab82092bdcdb7c7c24f46bccc68d515ea45c21a3c5f34ecb7a264027" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:57.984552 containerd[2001]: time="2025-07-06T23:28:57.984044711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dxdt2,Uid:1b168d18-462c-4fdf-bcac-01c36756c318,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:57.989241 containerd[2001]: time="2025-07-06T23:28:57.986897507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4c5f4bd-4pgc4,Uid:078364f1-485e-48a1-bf69-3a39df214c85,Namespace:calico-system,Attempt:0,}" Jul 6 23:28:57.992163 containerd[2001]: time="2025-07-06T23:28:57.989578583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-pgz5t,Uid:96c13034-9cd7-46a4-9bf2-d29d22006954,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:58.079605 systemd[1]: Started cri-containerd-11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02.scope - libcontainer container 11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02. Jul 6 23:28:58.162087 kubelet[3305]: I0706 23:28:58.162015 3305 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:28:58.439894 sshd[5113]: Connection closed by 139.178.89.65 port 53054 Jul 6 23:28:58.441518 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:58.455823 systemd[1]: sshd@7-172.31.19.251:22-139.178.89.65:53054.service: Deactivated successfully. Jul 6 23:28:58.462098 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:28:58.481108 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:28:58.488725 systemd-logind[1977]: Removed session 8. 
Jul 6 23:28:58.989099 containerd[2001]: time="2025-07-06T23:28:58.987319236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-cbzd2,Uid:0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:28:59.152886 containerd[2001]: time="2025-07-06T23:28:59.152792301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nmvf,Uid:94edf3c8-6c21-47d9-9a93-a87b343dac2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02\"" Jul 6 23:28:59.237291 systemd-networkd[1805]: cali2120f2eadd6: Link UP Jul 6 23:28:59.242497 systemd-networkd[1805]: cali2120f2eadd6: Gained carrier Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.426 [INFO][5158] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.602 [INFO][5158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0 calico-kube-controllers-cd4c5f4bd- calico-system 078364f1-485e-48a1-bf69-3a39df214c85 872 0 2025-07-06 23:28:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd4c5f4bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-251 calico-kube-controllers-cd4c5f4bd-4pgc4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2120f2eadd6 [] [] }} ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.603 [INFO][5158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.830 [INFO][5202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" HandleID="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Workload="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.835 [INFO][5202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" HandleID="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Workload="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026d990), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-251", "pod":"calico-kube-controllers-cd4c5f4bd-4pgc4", "timestamp":"2025-07-06 23:28:58.830601023 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.835 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.836 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.836 [INFO][5202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.895 [INFO][5202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:58.950 [INFO][5202] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.041 [INFO][5202] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.082 [INFO][5202] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.104 [INFO][5202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.104 [INFO][5202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.111 [INFO][5202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35 Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.125 [INFO][5202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.168 [INFO][5202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.169 [INFO][5202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" host="ip-172-31-19-251" Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.170 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:28:59.316722 containerd[2001]: 2025-07-06 23:28:59.172 [INFO][5202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" HandleID="k8s-pod-network.91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Workload="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.317978 containerd[2001]: 2025-07-06 23:28:59.187 [INFO][5158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0", GenerateName:"calico-kube-controllers-cd4c5f4bd-", Namespace:"calico-system", SelfLink:"", UID:"078364f1-485e-48a1-bf69-3a39df214c85", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd4c5f4bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"calico-kube-controllers-cd4c5f4bd-4pgc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2120f2eadd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:59.317978 containerd[2001]: 2025-07-06 23:28:59.187 [INFO][5158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.317978 containerd[2001]: 2025-07-06 23:28:59.187 [INFO][5158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2120f2eadd6 ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.317978 containerd[2001]: 2025-07-06 23:28:59.249 [INFO][5158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.317978 containerd[2001]: 2025-07-06 
23:28:59.255 [INFO][5158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0", GenerateName:"calico-kube-controllers-cd4c5f4bd-", Namespace:"calico-system", SelfLink:"", UID:"078364f1-485e-48a1-bf69-3a39df214c85", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd4c5f4bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35", Pod:"calico-kube-controllers-cd4c5f4bd-4pgc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2120f2eadd6", MAC:"62:f0:0e:26:64:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:59.317978 containerd[2001]: 2025-07-06 23:28:59.303 [INFO][5158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" Namespace="calico-system" Pod="calico-kube-controllers-cd4c5f4bd-4pgc4" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--kube--controllers--cd4c5f4bd--4pgc4-eth0" Jul 6 23:28:59.489841 systemd-networkd[1805]: calid919b8b0a53: Link UP Jul 6 23:28:59.493217 containerd[2001]: time="2025-07-06T23:28:59.491765759Z" level=info msg="connecting to shim 91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35" address="unix:///run/containerd/s/02e0cb329f1804616e14f682a5fd13759bd03c9f0da5b4ed92d8cca8c394482f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:59.498254 systemd-networkd[1805]: calid919b8b0a53: Gained carrier Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:58.462 [INFO][5162] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:58.624 [INFO][5162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0 calico-apiserver-6bffc59bcd- calico-apiserver 96c13034-9cd7-46a4-9bf2-d29d22006954 874 0 2025-07-06 23:28:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bffc59bcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-251 calico-apiserver-6bffc59bcd-pgz5t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid919b8b0a53 [] [] }} ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:58.625 [INFO][5162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:58.923 [INFO][5210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" HandleID="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Workload="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:58.924 [INFO][5210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" HandleID="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Workload="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031c6a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-251", "pod":"calico-apiserver-6bffc59bcd-pgz5t", "timestamp":"2025-07-06 23:28:58.923592228 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:58.924 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.170 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.171 [INFO][5210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.265 [INFO][5210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.306 [INFO][5210] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.326 [INFO][5210] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.336 [INFO][5210] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.346 [INFO][5210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.346 [INFO][5210] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.354 [INFO][5210] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4 Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.375 [INFO][5210] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.426 [INFO][5210] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.427 [INFO][5210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" host="ip-172-31-19-251" Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.428 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
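The IPAM entries above show Calico confirming this node's affinity for the block 192.168.88.128/26 and then claiming single addresses out of it (192.168.88.132 for the kube-controllers pod, 192.168.88.133 for the apiserver pod). As a minimal illustration of the containment arithmetic involved, and not Calico's own code, Go's standard net package can confirm that the claimed addresses sit inside the affine block:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block for which ip-172-31-19-251 holds affinity, per the IPAM log above.
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}

	// Addresses the log shows being claimed out of that block so far.
	for _, addr := range []string{"192.168.88.132", "192.168.88.133"} {
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(net.ParseIP(addr)))
	}
}
```

Both checks print true; the later assignments further down in this log (.134, .135, .136) come out of the same /26, which is why every pod endpoint on ip-172-31-19-251 lands inside 192.168.88.128/26.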
Jul 6 23:28:59.566736 containerd[2001]: 2025-07-06 23:28:59.430 [INFO][5210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" HandleID="k8s-pod-network.493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Workload="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.569628 containerd[2001]: 2025-07-06 23:28:59.452 [INFO][5162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0", GenerateName:"calico-apiserver-6bffc59bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"96c13034-9cd7-46a4-9bf2-d29d22006954", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffc59bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"calico-apiserver-6bffc59bcd-pgz5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid919b8b0a53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:59.569628 containerd[2001]: 2025-07-06 23:28:59.454 [INFO][5162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.569628 containerd[2001]: 2025-07-06 23:28:59.458 [INFO][5162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid919b8b0a53 ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.569628 containerd[2001]: 2025-07-06 23:28:59.507 [INFO][5162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.569628 containerd[2001]: 2025-07-06 23:28:59.514 [INFO][5162] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0", GenerateName:"calico-apiserver-6bffc59bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"96c13034-9cd7-46a4-9bf2-d29d22006954", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffc59bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4", Pod:"calico-apiserver-6bffc59bcd-pgz5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid919b8b0a53", MAC:"8a:9b:40:0d:34:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:59.569628 containerd[2001]: 2025-07-06 23:28:59.555 [INFO][5162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-pgz5t" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--pgz5t-eth0" Jul 6 23:28:59.681373 systemd-networkd[1805]: cali8d731bd5a89: Gained IPv6LL Jul 6 23:28:59.690645 systemd[1]: Started cri-containerd-91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35.scope - libcontainer container 91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35. 
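The endpoint dumps in these entries are Go struct values printed verbatim, which makes them hard to scan. Only a handful of fields actually vary per pod; the sketch below uses a trimmed, hypothetical struct (not the real projectcalico.org v3.WorkloadEndpoint type) populated with the values logged for the calico-kube-controllers pod above, purely to make the dump readable:

```go
package main

import "fmt"

// workloadEndpoint is a trimmed, hypothetical stand-in for the handful of
// fields that vary between the v3.WorkloadEndpoint dumps in this log.
type workloadEndpoint struct {
	Node          string
	Pod           string
	ContainerID   string
	InterfaceName string   // host-side veth created by the CNI plugin
	MAC           string   // filled in once the veth is wired up
	IPNetworks    []string // per-pod /32 addresses
	Profiles      []string // namespace and service-account profiles
}

func main() {
	// Values copied from the "Added Mac, interface name, and active container ID"
	// entry for calico-kube-controllers-cd4c5f4bd-4pgc4 above.
	ep := workloadEndpoint{
		Node:          "ip-172-31-19-251",
		Pod:           "calico-kube-controllers-cd4c5f4bd-4pgc4",
		ContainerID:   "91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35",
		InterfaceName: "cali2120f2eadd6",
		MAC:           "62:f0:0e:26:64:a0",
		IPNetworks:    []string{"192.168.88.132/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"},
	}
	fmt.Printf("%+v\n", ep)
}
```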
Jul 6 23:28:59.789131 containerd[2001]: time="2025-07-06T23:28:59.787078680Z" level=info msg="connecting to shim 493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4" address="unix:///run/containerd/s/d37ca7e2d5080a17691c446505f6d92b1f141783c7ea85d48b7d445f7fee466a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:59.827470 systemd-networkd[1805]: calib156ef142a7: Link UP Jul 6 23:28:59.832486 systemd-networkd[1805]: calib156ef142a7: Gained carrier Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:58.380 [INFO][5152] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:58.576 [INFO][5152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0 goldmane-768f4c5c69- calico-system 1b168d18-462c-4fdf-bcac-01c36756c318 871 0 2025-07-06 23:28:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-251 goldmane-768f4c5c69-dxdt2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib156ef142a7 [] [] }} ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:58.576 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:58.925 [INFO][5200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" HandleID="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Workload="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:58.926 [INFO][5200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" HandleID="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Workload="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102c30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-251", "pod":"goldmane-768f4c5c69-dxdt2", "timestamp":"2025-07-06 23:28:58.92538954 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:58.926 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.428 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.429 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.523 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.561 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.595 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.605 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.626 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.627 [INFO][5200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.653 [INFO][5200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.707 [INFO][5200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.760 [INFO][5200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.760 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" host="ip-172-31-19-251" Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.761 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:28:59.930229 containerd[2001]: 2025-07-06 23:28:59.761 [INFO][5200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" HandleID="k8s-pod-network.f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Workload="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:28:59.932478 containerd[2001]: 2025-07-06 23:28:59.789 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1b168d18-462c-4fdf-bcac-01c36756c318", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"goldmane-768f4c5c69-dxdt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib156ef142a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:59.932478 containerd[2001]: 2025-07-06 23:28:59.789 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:28:59.932478 containerd[2001]: 2025-07-06 23:28:59.789 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib156ef142a7 ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:28:59.932478 containerd[2001]: 2025-07-06 23:28:59.850 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:28:59.932478 containerd[2001]: 2025-07-06 23:28:59.854 [INFO][5152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" 
WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1b168d18-462c-4fdf-bcac-01c36756c318", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec", Pod:"goldmane-768f4c5c69-dxdt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib156ef142a7", MAC:"ba:ae:df:a4:24:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:28:59.932478 containerd[2001]: 2025-07-06 23:28:59.912 [INFO][5152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" Namespace="calico-system" Pod="goldmane-768f4c5c69-dxdt2" WorkloadEndpoint="ip--172--31--19--251-k8s-goldmane--768f4c5c69--dxdt2-eth0" Jul 6 23:29:00.027802 systemd-networkd[1805]: calia8ef3db9302: Link UP Jul 6 23:29:00.033964 systemd-networkd[1805]: calia8ef3db9302: Gained carrier Jul 6 23:29:00.093516 systemd[1]: Started cri-containerd-493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4.scope - libcontainer container 493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4. 
Jul 6 23:29:00.134700 containerd[2001]: time="2025-07-06T23:29:00.134591950Z" level=info msg="connecting to shim f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec" address="unix:///run/containerd/s/4f0538eff55bfb23ffb89b457771a6ca05c6a11219c7fbbd5e9e2901cf9612e7" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.209 [INFO][5227] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.310 [INFO][5227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0 calico-apiserver-6bffc59bcd- calico-apiserver 0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac 875 0 2025-07-06 23:28:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bffc59bcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-251 calico-apiserver-6bffc59bcd-cbzd2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia8ef3db9302 [] [] }} ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.313 [INFO][5227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.638 [INFO][5257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" HandleID="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Workload="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.638 [INFO][5257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" HandleID="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Workload="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-251", "pod":"calico-apiserver-6bffc59bcd-cbzd2", "timestamp":"2025-07-06 23:28:59.638587703 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.639 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.761 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.761 [INFO][5257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.798 [INFO][5257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.840 [INFO][5257] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.873 [INFO][5257] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.887 [INFO][5257] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.900 [INFO][5257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.904 [INFO][5257] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.918 [INFO][5257] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.933 [INFO][5257] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.958 [INFO][5257] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.962 [INFO][5257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" host="ip-172-31-19-251" Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.964 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
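Every assignment in this log is bracketed by "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", so concurrent CNI ADDs on the node are serialized while a block is being updated. The sketch below illustrates that serialization pattern only; it is not Calico's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator hands out addresses from one block while holding a node-wide
// lock, mirroring the acquire/assign/release bracket seen in the log.
type allocator struct {
	mu   sync.Mutex
	next int // next free host offset inside 192.168.88.128/26
}

func (a *allocator) assign() string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", 128+a.next)
	a.next++
	return ip
}

func main() {
	a := &allocator{next: 4} // .132 was the next free address at this point in the log
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-kube-controllers", "calico-apiserver-pgz5t", "goldmane"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			// Which pod gets which address depends on lock order, as it would
			// for concurrent CNI ADD calls on the same node.
			fmt.Println(p, "->", a.assign())
		}(pod)
	}
	wg.Wait()
}
```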
Jul 6 23:29:00.152153 containerd[2001]: 2025-07-06 23:28:59.964 [INFO][5257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" HandleID="k8s-pod-network.0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Workload="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.154333 containerd[2001]: 2025-07-06 23:28:59.989 [INFO][5227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0", GenerateName:"calico-apiserver-6bffc59bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffc59bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"calico-apiserver-6bffc59bcd-cbzd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8ef3db9302", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:00.154333 containerd[2001]: 2025-07-06 23:28:59.991 [INFO][5227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.154333 containerd[2001]: 2025-07-06 23:28:59.993 [INFO][5227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ef3db9302 ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.154333 containerd[2001]: 2025-07-06 23:29:00.065 [INFO][5227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.154333 containerd[2001]: 2025-07-06 23:29:00.085 [INFO][5227] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0", GenerateName:"calico-apiserver-6bffc59bcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bffc59bcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c", Pod:"calico-apiserver-6bffc59bcd-cbzd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8ef3db9302", MAC:"d6:93:3a:13:a8:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:00.154333 containerd[2001]: 2025-07-06 23:29:00.123 [INFO][5227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" Namespace="calico-apiserver" Pod="calico-apiserver-6bffc59bcd-cbzd2" WorkloadEndpoint="ip--172--31--19--251-k8s-calico--apiserver--6bffc59bcd--cbzd2-eth0" Jul 6 23:29:00.321443 systemd-networkd[1805]: cali2120f2eadd6: Gained IPv6LL Jul 6 23:29:00.380522 systemd[1]: Started cri-containerd-f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec.scope - libcontainer container f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec. Jul 6 23:29:00.416609 containerd[2001]: time="2025-07-06T23:29:00.416409695Z" level=info msg="connecting to shim 0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c" address="unix:///run/containerd/s/74249dd8dd86b791e6c6fb7add228e2618366e9a54d533cfd1a3afa750623a12" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:00.520294 containerd[2001]: time="2025-07-06T23:29:00.520072872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4c5f4bd-4pgc4,Uid:078364f1-485e-48a1-bf69-3a39df214c85,Namespace:calico-system,Attempt:0,} returns sandbox id \"91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35\"" Jul 6 23:29:00.633567 systemd[1]: Started cri-containerd-0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c.scope - libcontainer container 0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c. 
Jul 6 23:29:00.706362 systemd-networkd[1805]: calid919b8b0a53: Gained IPv6LL Jul 6 23:29:00.935410 containerd[2001]: time="2025-07-06T23:29:00.934948694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-pgz5t,Uid:96c13034-9cd7-46a4-9bf2-d29d22006954,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4\"" Jul 6 23:29:00.984594 containerd[2001]: time="2025-07-06T23:29:00.984516242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-blcpd,Uid:c2a48b47-c513-49cb-b4ab-a214c8933d5d,Namespace:kube-system,Attempt:0,}" Jul 6 23:29:01.080945 containerd[2001]: time="2025-07-06T23:29:01.080870579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bffc59bcd-cbzd2,Uid:0e91b1cd-dd06-4ca1-a59b-5b9bd40fbcac,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c\"" Jul 6 23:29:01.263401 containerd[2001]: time="2025-07-06T23:29:01.262835339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dxdt2,Uid:1b168d18-462c-4fdf-bcac-01c36756c318,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec\"" Jul 6 23:29:01.339981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013641797.mount: Deactivated successfully. Jul 6 23:29:01.403159 containerd[2001]: time="2025-07-06T23:29:01.403093080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:01.408284 containerd[2001]: time="2025-07-06T23:29:01.407635632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:29:01.421076 containerd[2001]: time="2025-07-06T23:29:01.420766392Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:01.449412 containerd[2001]: time="2025-07-06T23:29:01.449337312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:01.464470 containerd[2001]: time="2025-07-06T23:29:01.463688352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 5.81654506s" Jul 6 23:29:01.464470 containerd[2001]: time="2025-07-06T23:29:01.463756956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:29:01.469457 containerd[2001]: time="2025-07-06T23:29:01.469343400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:29:01.481572 containerd[2001]: time="2025-07-06T23:29:01.481174141Z" level=info msg="CreateContainer within sandbox \"764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0\" for container 
&ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:29:01.513391 containerd[2001]: time="2025-07-06T23:29:01.510996313Z" level=info msg="Container 4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:01.533234 containerd[2001]: time="2025-07-06T23:29:01.532299925Z" level=info msg="CreateContainer within sandbox \"764c39aa0bac9c77b52df934657a3eba14e7ed93378b3bf36745dba193e762b0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e\"" Jul 6 23:29:01.535443 containerd[2001]: time="2025-07-06T23:29:01.535345057Z" level=info msg="StartContainer for \"4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e\"" Jul 6 23:29:01.542629 containerd[2001]: time="2025-07-06T23:29:01.542551117Z" level=info msg="connecting to shim 4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e" address="unix:///run/containerd/s/5c42b6ee310866be53a859c6c1de05a7111c1c54adb4def294bbbeacdc094433" protocol=ttrpc version=3 Jul 6 23:29:01.594750 systemd[1]: Started cri-containerd-4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e.scope - libcontainer container 4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e. Jul 6 23:29:01.613395 systemd-networkd[1805]: cali8032c398031: Link UP Jul 6 23:29:01.617233 systemd-networkd[1805]: cali8032c398031: Gained carrier Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.218 [INFO][5473] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.299 [INFO][5473] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0 coredns-674b8bbfcf- kube-system c2a48b47-c513-49cb-b4ab-a214c8933d5d 870 0 2025-07-06 23:28:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-251 coredns-674b8bbfcf-blcpd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8032c398031 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.299 [INFO][5473] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.396 [INFO][5509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" HandleID="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Workload="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.397 [INFO][5509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" HandleID="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" 
Workload="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbe40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-251", "pod":"coredns-674b8bbfcf-blcpd", "timestamp":"2025-07-06 23:29:01.396771276 +0000 UTC"}, Hostname:"ip-172-31-19-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.397 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.398 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.398 [INFO][5509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-251' Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.438 [INFO][5509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.454 [INFO][5509] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.482 [INFO][5509] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.494 [INFO][5509] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.501 [INFO][5509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.501 [INFO][5509] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.505 [INFO][5509] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6 Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.536 [INFO][5509] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.566 [INFO][5509] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.568 [INFO][5509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" host="ip-172-31-19-251" Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.568 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:29:01.653648 containerd[2001]: 2025-07-06 23:29:01.568 [INFO][5509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" HandleID="k8s-pod-network.328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Workload="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.659566 containerd[2001]: 2025-07-06 23:29:01.579 [INFO][5473] cni-plugin/k8s.go 418: Populated endpoint ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c2a48b47-c513-49cb-b4ab-a214c8933d5d", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"", Pod:"coredns-674b8bbfcf-blcpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8032c398031", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:01.659566 containerd[2001]: 2025-07-06 23:29:01.581 [INFO][5473] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.659566 containerd[2001]: 2025-07-06 23:29:01.581 [INFO][5473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8032c398031 ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.659566 containerd[2001]: 2025-07-06 23:29:01.621 [INFO][5473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" 
WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.659566 containerd[2001]: 2025-07-06 23:29:01.621 [INFO][5473] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c2a48b47-c513-49cb-b4ab-a214c8933d5d", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-251", ContainerID:"328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6", Pod:"coredns-674b8bbfcf-blcpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8032c398031", MAC:"7a:dc:d8:81:8f:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:29:01.659566 containerd[2001]: 2025-07-06 23:29:01.648 [INFO][5473] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-blcpd" WorkloadEndpoint="ip--172--31--19--251-k8s-coredns--674b8bbfcf--blcpd-eth0" Jul 6 23:29:01.664653 systemd-networkd[1805]: calib156ef142a7: Gained IPv6LL Jul 6 23:29:01.721523 containerd[2001]: time="2025-07-06T23:29:01.721329518Z" level=info msg="connecting to shim 328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6" address="unix:///run/containerd/s/50a8fcc76444a36a7a83a9b7c53bebe32dedee533b9a99ac19b16136c06caa12" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:01.792393 systemd-networkd[1805]: calia8ef3db9302: Gained IPv6LL Jul 6 23:29:01.816784 systemd[1]: Started cri-containerd-328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6.scope - libcontainer container 328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6. 
Jul 6 23:29:01.980308 containerd[2001]: time="2025-07-06T23:29:01.980052027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-blcpd,Uid:c2a48b47-c513-49cb-b4ab-a214c8933d5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6\"" Jul 6 23:29:01.996764 containerd[2001]: time="2025-07-06T23:29:01.996686835Z" level=info msg="CreateContainer within sandbox \"328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:02.031465 containerd[2001]: time="2025-07-06T23:29:02.030551147Z" level=info msg="Container 3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:02.058977 containerd[2001]: time="2025-07-06T23:29:02.058388867Z" level=info msg="CreateContainer within sandbox \"328cd554d96addb83449d637fa1c9319158847ad46ae5abdab05c59189b183e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70\"" Jul 6 23:29:02.062204 containerd[2001]: time="2025-07-06T23:29:02.061930667Z" level=info msg="StartContainer for \"3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70\"" Jul 6 23:29:02.068720 containerd[2001]: time="2025-07-06T23:29:02.068644811Z" level=info msg="connecting to shim 3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70" address="unix:///run/containerd/s/50a8fcc76444a36a7a83a9b7c53bebe32dedee533b9a99ac19b16136c06caa12" protocol=ttrpc version=3 Jul 6 23:29:02.109430 containerd[2001]: time="2025-07-06T23:29:02.109349928Z" level=info msg="StartContainer for \"4127dccf7460e00126d8c6bb60c8f8a4150e86f1e7cd0b4126a6d676bb5f649e\" returns successfully" Jul 6 23:29:02.169813 systemd[1]: Started cri-containerd-3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70.scope - libcontainer container 3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70. 
Jul 6 23:29:02.274105 containerd[2001]: time="2025-07-06T23:29:02.274023264Z" level=info msg="StartContainer for \"3113e3e682a4edd0290948f19ec12ae6a9c06cdd8b03351d2329a92185ad4d70\" returns successfully" Jul 6 23:29:02.521225 kubelet[3305]: I0706 23:29:02.520004 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-blcpd" podStartSLOduration=53.519979334 podStartE2EDuration="53.519979334s" podCreationTimestamp="2025-07-06 23:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:02.51878831 +0000 UTC m=+58.886751630" watchObservedRunningTime="2025-07-06 23:29:02.519979334 +0000 UTC m=+58.887942558" Jul 6 23:29:02.549441 kubelet[3305]: I0706 23:29:02.548912 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dc9bbdfff-29tsm" podStartSLOduration=2.85351556 podStartE2EDuration="10.548888222s" podCreationTimestamp="2025-07-06 23:28:52 +0000 UTC" firstStartedPulling="2025-07-06 23:28:53.77259687 +0000 UTC m=+50.140560106" lastFinishedPulling="2025-07-06 23:29:01.467969448 +0000 UTC m=+57.835932768" observedRunningTime="2025-07-06 23:29:02.548428814 +0000 UTC m=+58.916392062" watchObservedRunningTime="2025-07-06 23:29:02.548888222 +0000 UTC m=+58.916851458" Jul 6 23:29:02.800488 systemd-networkd[1805]: vxlan.calico: Link UP Jul 6 23:29:02.801014 (udev-worker)[4650]: Network interface NamePolicy= disabled on kernel command line. Jul 6 23:29:02.802522 systemd-networkd[1805]: vxlan.calico: Gained carrier Jul 6 23:29:02.881419 systemd-networkd[1805]: cali8032c398031: Gained IPv6LL Jul 6 23:29:03.076136 containerd[2001]: time="2025-07-06T23:29:03.075617976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:03.080359 containerd[2001]: time="2025-07-06T23:29:03.080297472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:29:03.082967 containerd[2001]: time="2025-07-06T23:29:03.082886580Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:03.088861 containerd[2001]: time="2025-07-06T23:29:03.088736665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:03.091574 containerd[2001]: time="2025-07-06T23:29:03.091398049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.621964013s" Jul 6 23:29:03.091574 containerd[2001]: time="2025-07-06T23:29:03.091452541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:29:03.095110 containerd[2001]: time="2025-07-06T23:29:03.095036425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:29:03.102308 containerd[2001]: 
time="2025-07-06T23:29:03.102116905Z" level=info msg="CreateContainer within sandbox \"11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:29:03.134212 containerd[2001]: time="2025-07-06T23:29:03.132004321Z" level=info msg="Container 15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:03.158965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502252901.mount: Deactivated successfully. Jul 6 23:29:03.175112 containerd[2001]: time="2025-07-06T23:29:03.174999289Z" level=info msg="CreateContainer within sandbox \"11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965\"" Jul 6 23:29:03.178378 containerd[2001]: time="2025-07-06T23:29:03.178251517Z" level=info msg="StartContainer for \"15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965\"" Jul 6 23:29:03.187435 containerd[2001]: time="2025-07-06T23:29:03.187378489Z" level=info msg="connecting to shim 15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965" address="unix:///run/containerd/s/0f71ceccab82092bdcdb7c7c24f46bccc68d515ea45c21a3c5f34ecb7a264027" protocol=ttrpc version=3 Jul 6 23:29:03.251815 systemd[1]: Started cri-containerd-15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965.scope - libcontainer container 15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965. Jul 6 23:29:03.401158 containerd[2001]: time="2025-07-06T23:29:03.398718398Z" level=info msg="StartContainer for \"15c5cbcbd12032761727ec3788b7cc78ffe5fdb413b3c0dc746e47f8a62ae965\" returns successfully" Jul 6 23:29:03.480655 systemd[1]: Started sshd@8-172.31.19.251:22-139.178.89.65:56318.service - OpenSSH per-connection server daemon (139.178.89.65:56318). Jul 6 23:29:03.730477 sshd[5770]: Accepted publickey for core from 139.178.89.65 port 56318 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:03.735353 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:03.751690 systemd-logind[1977]: New session 9 of user core. Jul 6 23:29:03.758860 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:29:04.061294 sshd[5779]: Connection closed by 139.178.89.65 port 56318 Jul 6 23:29:04.060362 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:04.075373 systemd[1]: sshd@8-172.31.19.251:22-139.178.89.65:56318.service: Deactivated successfully. Jul 6 23:29:04.085444 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:29:04.089030 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:29:04.092697 systemd-logind[1977]: Removed session 9. 
Jul 6 23:29:04.417398 systemd-networkd[1805]: vxlan.calico: Gained IPv6LL Jul 6 23:29:05.621723 containerd[2001]: time="2025-07-06T23:29:05.621667673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:05.624027 containerd[2001]: time="2025-07-06T23:29:05.623950025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 6 23:29:05.626237 containerd[2001]: time="2025-07-06T23:29:05.626137013Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:05.635087 containerd[2001]: time="2025-07-06T23:29:05.635017097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:05.637319 containerd[2001]: time="2025-07-06T23:29:05.636499325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.541394308s" Jul 6 23:29:05.637319 containerd[2001]: time="2025-07-06T23:29:05.636557645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 6 23:29:05.639205 containerd[2001]: time="2025-07-06T23:29:05.639111437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:29:05.679717 containerd[2001]: time="2025-07-06T23:29:05.679556381Z" level=info msg="CreateContainer within sandbox \"91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:29:05.699657 containerd[2001]: time="2025-07-06T23:29:05.699547973Z" level=info msg="Container b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:05.714821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570645139.mount: Deactivated successfully. 
Jul 6 23:29:05.725562 containerd[2001]: time="2025-07-06T23:29:05.725362722Z" level=info msg="CreateContainer within sandbox \"91603119b937ab30b8c5a74f8dcdf61d1e4d66d708795abd1d6a7949eb404d35\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\"" Jul 6 23:29:05.728375 containerd[2001]: time="2025-07-06T23:29:05.728171166Z" level=info msg="StartContainer for \"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\"" Jul 6 23:29:05.732310 containerd[2001]: time="2025-07-06T23:29:05.732224070Z" level=info msg="connecting to shim b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e" address="unix:///run/containerd/s/02e0cb329f1804616e14f682a5fd13759bd03c9f0da5b4ed92d8cca8c394482f" protocol=ttrpc version=3 Jul 6 23:29:05.803890 systemd[1]: Started cri-containerd-b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e.scope - libcontainer container b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e. Jul 6 23:29:05.915309 containerd[2001]: time="2025-07-06T23:29:05.915263035Z" level=info msg="StartContainer for \"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\" returns successfully" Jul 6 23:29:06.470832 ntpd[1970]: Listen normally on 7 vxlan.calico 192.168.88.128:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 7 vxlan.calico 192.168.88.128:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 8 cali62333254fd8 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 9 cali62b49b89571 [fe80::ecee:eeff:feee:eeee%5]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 10 cali8d731bd5a89 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 11 cali2120f2eadd6 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 12 calid919b8b0a53 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 13 calib156ef142a7 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 14 calia8ef3db9302 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 15 cali8032c398031 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 6 23:29:06.473400 ntpd[1970]: 6 Jul 23:29:06 ntpd[1970]: Listen normally on 16 vxlan.calico [fe80::6452:81ff:fe68:f46d%12]:123 Jul 6 23:29:06.471429 ntpd[1970]: Listen normally on 8 cali62333254fd8 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 6 23:29:06.471554 ntpd[1970]: Listen normally on 9 cali62b49b89571 [fe80::ecee:eeff:feee:eeee%5]:123 Jul 6 23:29:06.471630 ntpd[1970]: Listen normally on 10 cali8d731bd5a89 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 6 23:29:06.472258 ntpd[1970]: Listen normally on 11 cali2120f2eadd6 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 6 23:29:06.472576 ntpd[1970]: Listen normally on 12 calid919b8b0a53 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 6 23:29:06.472648 ntpd[1970]: Listen normally on 13 calib156ef142a7 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 6 23:29:06.472714 ntpd[1970]: Listen normally on 14 calia8ef3db9302 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 6 23:29:06.472788 ntpd[1970]: Listen normally on 15 cali8032c398031 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 6 23:29:06.472854 ntpd[1970]: Listen normally 
on 16 vxlan.calico [fe80::6452:81ff:fe68:f46d%12]:123 Jul 6 23:29:06.627006 containerd[2001]: time="2025-07-06T23:29:06.626929086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\" id:\"4fe9aa300160a437f9891b62ee0d013b065a908d1786d6f186d14f1d774b9f7d\" pid:5853 exited_at:{seconds:1751844546 nanos:626322738}" Jul 6 23:29:06.657230 kubelet[3305]: I0706 23:29:06.657118 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cd4c5f4bd-4pgc4" podStartSLOduration=28.547899313 podStartE2EDuration="33.657095994s" podCreationTimestamp="2025-07-06 23:28:33 +0000 UTC" firstStartedPulling="2025-07-06 23:29:00.528995664 +0000 UTC m=+56.896958888" lastFinishedPulling="2025-07-06 23:29:05.638192261 +0000 UTC m=+62.006155569" observedRunningTime="2025-07-06 23:29:06.580542846 +0000 UTC m=+62.948506178" watchObservedRunningTime="2025-07-06 23:29:06.657095994 +0000 UTC m=+63.025059230" Jul 6 23:29:09.104770 systemd[1]: Started sshd@9-172.31.19.251:22-139.178.89.65:56322.service - OpenSSH per-connection server daemon (139.178.89.65:56322). Jul 6 23:29:09.255533 containerd[2001]: time="2025-07-06T23:29:09.255325735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:09.257580 containerd[2001]: time="2025-07-06T23:29:09.257497171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:29:09.260141 containerd[2001]: time="2025-07-06T23:29:09.260059027Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:09.264772 containerd[2001]: time="2025-07-06T23:29:09.264642847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:09.266408 containerd[2001]: time="2025-07-06T23:29:09.266003755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.62676789s" Jul 6 23:29:09.266408 containerd[2001]: time="2025-07-06T23:29:09.266074099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:29:09.269429 containerd[2001]: time="2025-07-06T23:29:09.269071135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:29:09.276588 containerd[2001]: time="2025-07-06T23:29:09.276525511Z" level=info msg="CreateContainer within sandbox \"493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:29:09.296839 containerd[2001]: time="2025-07-06T23:29:09.296781955Z" level=info msg="Container b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:09.318203 containerd[2001]: time="2025-07-06T23:29:09.318048355Z" 
level=info msg="CreateContainer within sandbox \"493d29262a4ceec5c58110936136611c20a3c7c0b1bbe2583f09cf36caa448f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155\"" Jul 6 23:29:09.319691 containerd[2001]: time="2025-07-06T23:29:09.319529983Z" level=info msg="StartContainer for \"b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155\"" Jul 6 23:29:09.323373 containerd[2001]: time="2025-07-06T23:29:09.322500487Z" level=info msg="connecting to shim b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155" address="unix:///run/containerd/s/d37ca7e2d5080a17691c446505f6d92b1f141783c7ea85d48b7d445f7fee466a" protocol=ttrpc version=3 Jul 6 23:29:09.341252 sshd[5878]: Accepted publickey for core from 139.178.89.65 port 56322 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:09.346088 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:09.362138 systemd-logind[1977]: New session 10 of user core. Jul 6 23:29:09.370719 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:29:09.387332 systemd[1]: Started cri-containerd-b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155.scope - libcontainer container b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155. Jul 6 23:29:09.480621 containerd[2001]: time="2025-07-06T23:29:09.480530852Z" level=info msg="StartContainer for \"b998a0c83b4677ea6313909ed8e40a145666a401cd869a1bbcc0602cb4977155\" returns successfully" Jul 6 23:29:09.614007 kubelet[3305]: I0706 23:29:09.611450 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bffc59bcd-pgz5t" podStartSLOduration=37.2839638 podStartE2EDuration="45.611424021s" podCreationTimestamp="2025-07-06 23:28:24 +0000 UTC" firstStartedPulling="2025-07-06 23:29:00.94041053 +0000 UTC m=+57.308373754" lastFinishedPulling="2025-07-06 23:29:09.267870739 +0000 UTC m=+65.635833975" observedRunningTime="2025-07-06 23:29:09.608575197 +0000 UTC m=+65.976538685" watchObservedRunningTime="2025-07-06 23:29:09.611424021 +0000 UTC m=+65.979387245" Jul 6 23:29:09.640911 containerd[2001]: time="2025-07-06T23:29:09.640832733Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:09.648411 containerd[2001]: time="2025-07-06T23:29:09.648326169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:29:09.665357 containerd[2001]: time="2025-07-06T23:29:09.665273637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 396.134414ms" Jul 6 23:29:09.665357 containerd[2001]: time="2025-07-06T23:29:09.665344689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:29:09.672530 containerd[2001]: time="2025-07-06T23:29:09.669651513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:29:09.683877 containerd[2001]: 
time="2025-07-06T23:29:09.683798553Z" level=info msg="CreateContainer within sandbox \"0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:29:09.716208 containerd[2001]: time="2025-07-06T23:29:09.713835993Z" level=info msg="Container f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:09.747575 containerd[2001]: time="2025-07-06T23:29:09.747494494Z" level=info msg="CreateContainer within sandbox \"0f7dff7986a54920f59a145a6ab59c2ed2517c2d5059885671b908587a71eb4c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047\"" Jul 6 23:29:09.749342 containerd[2001]: time="2025-07-06T23:29:09.749252158Z" level=info msg="StartContainer for \"f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047\"" Jul 6 23:29:09.752598 containerd[2001]: time="2025-07-06T23:29:09.752522914Z" level=info msg="connecting to shim f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047" address="unix:///run/containerd/s/74249dd8dd86b791e6c6fb7add228e2618366e9a54d533cfd1a3afa750623a12" protocol=ttrpc version=3 Jul 6 23:29:09.754344 sshd[5895]: Connection closed by 139.178.89.65 port 56322 Jul 6 23:29:09.757674 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:09.778777 systemd[1]: sshd@9-172.31.19.251:22-139.178.89.65:56322.service: Deactivated successfully. Jul 6 23:29:09.787803 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:29:09.814725 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:29:09.822830 systemd[1]: Started sshd@10-172.31.19.251:22-139.178.89.65:49572.service - OpenSSH per-connection server daemon (139.178.89.65:49572). Jul 6 23:29:09.837139 systemd-logind[1977]: Removed session 10. Jul 6 23:29:09.866671 systemd[1]: Started cri-containerd-f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047.scope - libcontainer container f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047. Jul 6 23:29:10.054741 containerd[2001]: time="2025-07-06T23:29:10.054680767Z" level=info msg="StartContainer for \"f3c89e03e43738f7e88cbb29098baa1c896b572256d711378fb40b6c07a40047\" returns successfully" Jul 6 23:29:10.078578 sshd[5940]: Accepted publickey for core from 139.178.89.65 port 49572 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:10.083770 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:10.101607 systemd-logind[1977]: New session 11 of user core. Jul 6 23:29:10.110837 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:29:10.553226 sshd[5968]: Connection closed by 139.178.89.65 port 49572 Jul 6 23:29:10.552540 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:10.565637 systemd[1]: sshd@10-172.31.19.251:22-139.178.89.65:49572.service: Deactivated successfully. Jul 6 23:29:10.579394 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:29:10.590093 kubelet[3305]: I0706 23:29:10.588154 3305 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:29:10.589305 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. 
Jul 6 23:29:10.616494 systemd[1]: Started sshd@11-172.31.19.251:22-139.178.89.65:49574.service - OpenSSH per-connection server daemon (139.178.89.65:49574). Jul 6 23:29:10.619437 systemd-logind[1977]: Removed session 11. Jul 6 23:29:10.644399 kubelet[3305]: I0706 23:29:10.643960 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bffc59bcd-cbzd2" podStartSLOduration=38.066308916 podStartE2EDuration="46.643936234s" podCreationTimestamp="2025-07-06 23:28:24 +0000 UTC" firstStartedPulling="2025-07-06 23:29:01.091069559 +0000 UTC m=+57.459032783" lastFinishedPulling="2025-07-06 23:29:09.668696865 +0000 UTC m=+66.036660101" observedRunningTime="2025-07-06 23:29:10.641103046 +0000 UTC m=+67.009066306" watchObservedRunningTime="2025-07-06 23:29:10.643936234 +0000 UTC m=+67.011899470" Jul 6 23:29:10.884698 sshd[5985]: Accepted publickey for core from 139.178.89.65 port 49574 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:10.897827 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:10.921395 systemd-logind[1977]: New session 12 of user core. Jul 6 23:29:10.934922 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:29:11.409019 sshd[5989]: Connection closed by 139.178.89.65 port 49574 Jul 6 23:29:11.409592 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:11.434903 systemd[1]: sshd@11-172.31.19.251:22-139.178.89.65:49574.service: Deactivated successfully. Jul 6 23:29:11.445927 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:29:11.450910 systemd-logind[1977]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:29:11.458933 systemd-logind[1977]: Removed session 12. Jul 6 23:29:12.601090 kubelet[3305]: I0706 23:29:12.600683 3305 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:29:12.925557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312815693.mount: Deactivated successfully. 
Jul 6 23:29:14.467559 containerd[2001]: time="2025-07-06T23:29:14.467484505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:14.471777 containerd[2001]: time="2025-07-06T23:29:14.471695797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:29:14.475708 containerd[2001]: time="2025-07-06T23:29:14.475355365Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:14.485542 containerd[2001]: time="2025-07-06T23:29:14.485477005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 4.814658708s" Jul 6 23:29:14.485542 containerd[2001]: time="2025-07-06T23:29:14.485543677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:29:14.486355 containerd[2001]: time="2025-07-06T23:29:14.486286765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:14.491882 containerd[2001]: time="2025-07-06T23:29:14.491821045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:29:14.499396 containerd[2001]: time="2025-07-06T23:29:14.499252189Z" level=info msg="CreateContainer within sandbox \"f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:29:14.519866 containerd[2001]: time="2025-07-06T23:29:14.519616489Z" level=info msg="Container c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:14.562710 containerd[2001]: time="2025-07-06T23:29:14.562607234Z" level=info msg="CreateContainer within sandbox \"f3ff9c70044e1957c22886cfceb10fddbea5666b2ce161553c22449608f1cfec\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\"" Jul 6 23:29:14.565210 containerd[2001]: time="2025-07-06T23:29:14.564992630Z" level=info msg="StartContainer for \"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\"" Jul 6 23:29:14.570298 containerd[2001]: time="2025-07-06T23:29:14.569727734Z" level=info msg="connecting to shim c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778" address="unix:///run/containerd/s/4f0538eff55bfb23ffb89b457771a6ca05c6a11219c7fbbd5e9e2901cf9612e7" protocol=ttrpc version=3 Jul 6 23:29:14.645815 systemd[1]: Started cri-containerd-c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778.scope - libcontainer container c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778. 
Jul 6 23:29:14.838112 containerd[2001]: time="2025-07-06T23:29:14.837594207Z" level=info msg="StartContainer for \"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\" returns successfully" Jul 6 23:29:16.018291 containerd[2001]: time="2025-07-06T23:29:16.018233653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\" id:\"ccc494859b770836172da089b896f484ab06732a0cae46cf80707a48c5d5b483\" pid:6076 exited_at:{seconds:1751844556 nanos:16095577}" Jul 6 23:29:16.065534 kubelet[3305]: I0706 23:29:16.065323 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-dxdt2" podStartSLOduration=28.843385611 podStartE2EDuration="42.064800001s" podCreationTimestamp="2025-07-06 23:28:34 +0000 UTC" firstStartedPulling="2025-07-06 23:29:01.269439815 +0000 UTC m=+57.637403051" lastFinishedPulling="2025-07-06 23:29:14.490854205 +0000 UTC m=+70.858817441" observedRunningTime="2025-07-06 23:29:15.728136603 +0000 UTC m=+72.096099863" watchObservedRunningTime="2025-07-06 23:29:16.064800001 +0000 UTC m=+72.432763237" Jul 6 23:29:16.316136 containerd[2001]: time="2025-07-06T23:29:16.315984050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:16.320416 containerd[2001]: time="2025-07-06T23:29:16.320329022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:29:16.327294 containerd[2001]: time="2025-07-06T23:29:16.326984102Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:16.334707 containerd[2001]: time="2025-07-06T23:29:16.334534718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:16.339491 containerd[2001]: time="2025-07-06T23:29:16.339009878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.847121849s" Jul 6 23:29:16.339491 containerd[2001]: time="2025-07-06T23:29:16.339340946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:29:16.361920 containerd[2001]: time="2025-07-06T23:29:16.361850786Z" level=info msg="CreateContainer within sandbox \"11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:29:16.387223 containerd[2001]: time="2025-07-06T23:29:16.386741811Z" level=info msg="Container 0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:16.402832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770612136.mount: Deactivated successfully. 
Jul 6 23:29:16.415122 containerd[2001]: time="2025-07-06T23:29:16.415052535Z" level=info msg="CreateContainer within sandbox \"11cba9fd775ea8947164553c079476f77fd8fb891c6fe33eb1512b7bf8823d02\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe\"" Jul 6 23:29:16.416715 containerd[2001]: time="2025-07-06T23:29:16.416569407Z" level=info msg="StartContainer for \"0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe\"" Jul 6 23:29:16.422509 containerd[2001]: time="2025-07-06T23:29:16.422161731Z" level=info msg="connecting to shim 0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe" address="unix:///run/containerd/s/0f71ceccab82092bdcdb7c7c24f46bccc68d515ea45c21a3c5f34ecb7a264027" protocol=ttrpc version=3 Jul 6 23:29:16.470759 systemd[1]: Started sshd@12-172.31.19.251:22-139.178.89.65:49580.service - OpenSSH per-connection server daemon (139.178.89.65:49580). Jul 6 23:29:16.490111 systemd[1]: Started cri-containerd-0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe.scope - libcontainer container 0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe. Jul 6 23:29:16.599452 containerd[2001]: time="2025-07-06T23:29:16.598100956Z" level=info msg="StartContainer for \"0184928817ac7c2d29ce039bb589ae82e9d329d07ad495716d6ee03e8302abfe\" returns successfully" Jul 6 23:29:16.693291 kubelet[3305]: I0706 23:29:16.692984 3305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5nmvf" podStartSLOduration=26.507396167 podStartE2EDuration="43.692956432s" podCreationTimestamp="2025-07-06 23:28:33 +0000 UTC" firstStartedPulling="2025-07-06 23:28:59.158283345 +0000 UTC m=+55.526246569" lastFinishedPulling="2025-07-06 23:29:16.343843598 +0000 UTC m=+72.711806834" observedRunningTime="2025-07-06 23:29:16.690264748 +0000 UTC m=+73.058228020" watchObservedRunningTime="2025-07-06 23:29:16.692956432 +0000 UTC m=+73.060919656" Jul 6 23:29:16.704357 sshd[6106]: Accepted publickey for core from 139.178.89.65 port 49580 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:16.710753 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:16.719999 systemd-logind[1977]: New session 13 of user core. Jul 6 23:29:16.726442 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:29:16.999296 sshd[6129]: Connection closed by 139.178.89.65 port 49580 Jul 6 23:29:17.000331 sshd-session[6106]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:17.008004 systemd[1]: sshd@12-172.31.19.251:22-139.178.89.65:49580.service: Deactivated successfully. Jul 6 23:29:17.013298 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:29:17.016554 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:29:17.020427 systemd-logind[1977]: Removed session 13. 
Jul 6 23:29:17.158556 kubelet[3305]: I0706 23:29:17.158499 3305 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:29:17.159642 kubelet[3305]: I0706 23:29:17.158571 3305 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:29:19.320367 kubelet[3305]: I0706 23:29:19.320156 3305 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:29:22.040605 systemd[1]: Started sshd@13-172.31.19.251:22-139.178.89.65:37836.service - OpenSSH per-connection server daemon (139.178.89.65:37836). Jul 6 23:29:22.244257 sshd[6144]: Accepted publickey for core from 139.178.89.65 port 37836 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:22.248730 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:22.267547 systemd-logind[1977]: New session 14 of user core. Jul 6 23:29:22.274493 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:29:22.544488 sshd[6146]: Connection closed by 139.178.89.65 port 37836 Jul 6 23:29:22.545590 sshd-session[6144]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:22.555088 systemd-logind[1977]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:29:22.555372 systemd[1]: sshd@13-172.31.19.251:22-139.178.89.65:37836.service: Deactivated successfully. Jul 6 23:29:22.560479 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:29:22.563762 systemd-logind[1977]: Removed session 14. Jul 6 23:29:24.571053 containerd[2001]: time="2025-07-06T23:29:24.570875843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\" id:\"4f80da830f616c661ebd75135a344f2257299dfe4a19483ca7dd47db641f0375\" pid:6176 exited_at:{seconds:1751844564 nanos:569272595}" Jul 6 23:29:27.598312 systemd[1]: Started sshd@14-172.31.19.251:22-139.178.89.65:37840.service - OpenSSH per-connection server daemon (139.178.89.65:37840). Jul 6 23:29:27.813314 sshd[6189]: Accepted publickey for core from 139.178.89.65 port 37840 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:27.817811 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:27.829998 systemd-logind[1977]: New session 15 of user core. Jul 6 23:29:27.836498 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:29:28.106600 sshd[6192]: Connection closed by 139.178.89.65 port 37840 Jul 6 23:29:28.107625 sshd-session[6189]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:28.115714 systemd[1]: sshd@14-172.31.19.251:22-139.178.89.65:37840.service: Deactivated successfully. Jul 6 23:29:28.119973 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:29:28.123016 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:29:28.126429 systemd-logind[1977]: Removed session 15. Jul 6 23:29:33.150628 systemd[1]: Started sshd@15-172.31.19.251:22-139.178.89.65:38218.service - OpenSSH per-connection server daemon (139.178.89.65:38218). 
Jul 6 23:29:33.350225 sshd[6206]: Accepted publickey for core from 139.178.89.65 port 38218 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:33.353094 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:33.363303 systemd-logind[1977]: New session 16 of user core. Jul 6 23:29:33.369437 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:29:33.618678 sshd[6208]: Connection closed by 139.178.89.65 port 38218 Jul 6 23:29:33.619655 sshd-session[6206]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:33.626851 systemd[1]: sshd@15-172.31.19.251:22-139.178.89.65:38218.service: Deactivated successfully. Jul 6 23:29:33.630394 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:29:33.633095 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:29:33.637469 systemd-logind[1977]: Removed session 16. Jul 6 23:29:33.657750 systemd[1]: Started sshd@16-172.31.19.251:22-139.178.89.65:38234.service - OpenSSH per-connection server daemon (139.178.89.65:38234). Jul 6 23:29:33.858469 sshd[6220]: Accepted publickey for core from 139.178.89.65 port 38234 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:33.861680 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:33.871303 systemd-logind[1977]: New session 17 of user core. Jul 6 23:29:33.878498 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:29:34.591299 sshd[6222]: Connection closed by 139.178.89.65 port 38234 Jul 6 23:29:34.592740 sshd-session[6220]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:34.602539 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:29:34.603414 systemd[1]: sshd@16-172.31.19.251:22-139.178.89.65:38234.service: Deactivated successfully. Jul 6 23:29:34.611129 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:29:34.652899 systemd-logind[1977]: Removed session 17. Jul 6 23:29:34.657812 systemd[1]: Started sshd@17-172.31.19.251:22-139.178.89.65:38248.service - OpenSSH per-connection server daemon (139.178.89.65:38248). Jul 6 23:29:34.922618 sshd[6232]: Accepted publickey for core from 139.178.89.65 port 38248 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:34.926087 sshd-session[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:34.939018 systemd-logind[1977]: New session 18 of user core. Jul 6 23:29:34.948523 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:29:36.732328 containerd[2001]: time="2025-07-06T23:29:36.732115632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\" id:\"a35a27ed9993268202609ee00cb132799cea4a90329ddfa81183511ad58d1656\" pid:6257 exited_at:{seconds:1751844576 nanos:731758548}" Jul 6 23:29:36.884354 sshd[6234]: Connection closed by 139.178.89.65 port 38248 Jul 6 23:29:36.885625 sshd-session[6232]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:36.898989 systemd[1]: sshd@17-172.31.19.251:22-139.178.89.65:38248.service: Deactivated successfully. Jul 6 23:29:36.906800 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:29:36.910335 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit. 
Jul 6 23:29:36.940386 systemd[1]: Started sshd@18-172.31.19.251:22-139.178.89.65:38256.service - OpenSSH per-connection server daemon (139.178.89.65:38256). Jul 6 23:29:36.943338 systemd-logind[1977]: Removed session 18. Jul 6 23:29:37.159440 sshd[6271]: Accepted publickey for core from 139.178.89.65 port 38256 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:37.167931 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:37.183660 systemd-logind[1977]: New session 19 of user core. Jul 6 23:29:37.193542 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:29:37.824803 sshd[6276]: Connection closed by 139.178.89.65 port 38256 Jul 6 23:29:37.825345 sshd-session[6271]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:37.837531 systemd[1]: sshd@18-172.31.19.251:22-139.178.89.65:38256.service: Deactivated successfully. Jul 6 23:29:37.847260 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:29:37.853560 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:29:37.874226 systemd[1]: Started sshd@19-172.31.19.251:22-139.178.89.65:38264.service - OpenSSH per-connection server daemon (139.178.89.65:38264). Jul 6 23:29:37.879528 systemd-logind[1977]: Removed session 19. Jul 6 23:29:38.076974 sshd[6286]: Accepted publickey for core from 139.178.89.65 port 38264 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:38.080012 sshd-session[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:38.088874 systemd-logind[1977]: New session 20 of user core. Jul 6 23:29:38.098764 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:29:38.362925 sshd[6288]: Connection closed by 139.178.89.65 port 38264 Jul 6 23:29:38.364104 sshd-session[6286]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:38.375860 systemd[1]: sshd@19-172.31.19.251:22-139.178.89.65:38264.service: Deactivated successfully. Jul 6 23:29:38.385667 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:29:38.391471 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:29:38.395292 systemd-logind[1977]: Removed session 20. Jul 6 23:29:43.403267 systemd[1]: Started sshd@20-172.31.19.251:22-139.178.89.65:47326.service - OpenSSH per-connection server daemon (139.178.89.65:47326). Jul 6 23:29:43.604811 sshd[6305]: Accepted publickey for core from 139.178.89.65 port 47326 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:43.607091 sshd-session[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:43.615852 systemd-logind[1977]: New session 21 of user core. Jul 6 23:29:43.629496 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:29:43.883397 sshd[6314]: Connection closed by 139.178.89.65 port 47326 Jul 6 23:29:43.884791 sshd-session[6305]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:43.891904 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:29:43.893843 systemd[1]: sshd@20-172.31.19.251:22-139.178.89.65:47326.service: Deactivated successfully. Jul 6 23:29:43.900787 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:29:43.905567 systemd-logind[1977]: Removed session 21. 
Jul 6 23:29:45.810952 containerd[2001]: time="2025-07-06T23:29:45.810855873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\" id:\"ced23b343343c1c4428a0fd33c7ad37025fd76a8d9c289129c6310335ac3b041\" pid:6340 exited_at:{seconds:1751844585 nanos:810467145}" Jul 6 23:29:48.929936 systemd[1]: Started sshd@21-172.31.19.251:22-139.178.89.65:47336.service - OpenSSH per-connection server daemon (139.178.89.65:47336). Jul 6 23:29:49.153542 sshd[6351]: Accepted publickey for core from 139.178.89.65 port 47336 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:49.157357 sshd-session[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:49.170586 systemd-logind[1977]: New session 22 of user core. Jul 6 23:29:49.176543 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:29:49.481467 sshd[6353]: Connection closed by 139.178.89.65 port 47336 Jul 6 23:29:49.484551 sshd-session[6351]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:49.491915 systemd[1]: sshd@21-172.31.19.251:22-139.178.89.65:47336.service: Deactivated successfully. Jul 6 23:29:49.499399 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:29:49.505585 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:29:49.510165 systemd-logind[1977]: Removed session 22. Jul 6 23:29:50.571378 containerd[2001]: time="2025-07-06T23:29:50.571309176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\" id:\"f17673afd8e86231f2d17fc8af9c78ccef1d9626a73772588a5a1698f0bc6105\" pid:6376 exited_at:{seconds:1751844590 nanos:570853692}" Jul 6 23:29:53.267516 containerd[2001]: time="2025-07-06T23:29:53.267453878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\" id:\"5689b371488f3332f941db198d52deabd05f61ca759b9105bbd40ab34f398590\" pid:6398 exited_at:{seconds:1751844593 nanos:266915786}" Jul 6 23:29:54.524695 systemd[1]: Started sshd@22-172.31.19.251:22-139.178.89.65:46656.service - OpenSSH per-connection server daemon (139.178.89.65:46656). Jul 6 23:29:54.658034 containerd[2001]: time="2025-07-06T23:29:54.657964445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\" id:\"fa07c1e6da7326434e7b9686ef4ab3b03803bb4760b65ad04d9c23de4465b478\" pid:6420 exited_at:{seconds:1751844594 nanos:656588393}" Jul 6 23:29:54.734477 sshd[6431]: Accepted publickey for core from 139.178.89.65 port 46656 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:29:54.739770 sshd-session[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:54.751550 systemd-logind[1977]: New session 23 of user core. Jul 6 23:29:54.760594 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:29:55.100255 sshd[6434]: Connection closed by 139.178.89.65 port 46656 Jul 6 23:29:55.100094 sshd-session[6431]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:55.111157 systemd[1]: sshd@22-172.31.19.251:22-139.178.89.65:46656.service: Deactivated successfully. Jul 6 23:29:55.118124 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:29:55.120838 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit. 
Jul 6 23:29:55.126285 systemd-logind[1977]: Removed session 23. Jul 6 23:30:00.139370 systemd[1]: Started sshd@23-172.31.19.251:22-139.178.89.65:48552.service - OpenSSH per-connection server daemon (139.178.89.65:48552). Jul 6 23:30:00.339166 sshd[6447]: Accepted publickey for core from 139.178.89.65 port 48552 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:00.342049 sshd-session[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:00.357313 systemd-logind[1977]: New session 24 of user core. Jul 6 23:30:00.364302 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:30:00.656093 sshd[6449]: Connection closed by 139.178.89.65 port 48552 Jul 6 23:30:00.657520 sshd-session[6447]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:00.666974 systemd[1]: sshd@23-172.31.19.251:22-139.178.89.65:48552.service: Deactivated successfully. Jul 6 23:30:00.673742 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:30:00.678558 systemd-logind[1977]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:30:00.682702 systemd-logind[1977]: Removed session 24. Jul 6 23:30:05.698748 systemd[1]: Started sshd@24-172.31.19.251:22-139.178.89.65:48566.service - OpenSSH per-connection server daemon (139.178.89.65:48566). Jul 6 23:30:05.910700 sshd[6463]: Accepted publickey for core from 139.178.89.65 port 48566 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:05.913654 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:05.923298 systemd-logind[1977]: New session 25 of user core. Jul 6 23:30:05.931927 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:30:06.262401 sshd[6465]: Connection closed by 139.178.89.65 port 48566 Jul 6 23:30:06.264247 sshd-session[6463]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:06.276354 systemd[1]: sshd@24-172.31.19.251:22-139.178.89.65:48566.service: Deactivated successfully. Jul 6 23:30:06.280662 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:30:06.289357 systemd-logind[1977]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:30:06.297608 systemd-logind[1977]: Removed session 25. Jul 6 23:30:06.668116 containerd[2001]: time="2025-07-06T23:30:06.668040508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\" id:\"241e0896346795cc025ed6fa1e278d9ed7712cb4bd16acc14595d6656c2d35da\" pid:6488 exited_at:{seconds:1751844606 nanos:667094320}" Jul 6 23:30:11.307369 systemd[1]: Started sshd@25-172.31.19.251:22-139.178.89.65:41828.service - OpenSSH per-connection server daemon (139.178.89.65:41828). Jul 6 23:30:11.517250 sshd[6500]: Accepted publickey for core from 139.178.89.65 port 41828 ssh2: RSA SHA256:XIfYldZnofzYHiYUR3iIM5uml3xcST4usAlhecAY7Vw Jul 6 23:30:11.521617 sshd-session[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:11.532848 systemd-logind[1977]: New session 26 of user core. Jul 6 23:30:11.539520 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:30:11.818229 sshd[6502]: Connection closed by 139.178.89.65 port 41828 Jul 6 23:30:11.819322 sshd-session[6500]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:11.829226 systemd[1]: sshd@25-172.31.19.251:22-139.178.89.65:41828.service: Deactivated successfully. 
Jul 6 23:30:11.837058 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:30:11.841988 systemd-logind[1977]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:30:11.845832 systemd-logind[1977]: Removed session 26. Jul 6 23:30:15.786808 containerd[2001]: time="2025-07-06T23:30:15.786682502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\" id:\"1205713feca14828172483f999acabc0a090bbab81724c8d12d02e25c004934b\" pid:6525 exited_at:{seconds:1751844615 nanos:785707910}" Jul 6 23:30:24.508376 containerd[2001]: time="2025-07-06T23:30:24.508092885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f6f9f576c1808b24f43f5d3a2a0a18b541aaf121424f0d4937d34a134294a23\" id:\"c13477d831a0ef48518e5ab1886ab5a960a509c6da617bed645b4dd8aec8c8a4\" pid:6555 exited_at:{seconds:1751844624 nanos:507203049}" Jul 6 23:30:26.030363 systemd[1]: cri-containerd-9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e.scope: Deactivated successfully. Jul 6 23:30:26.031664 systemd[1]: cri-containerd-9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e.scope: Consumed 7.268s CPU time, 62.1M memory peak, 64K read from disk. Jul 6 23:30:26.046073 containerd[2001]: time="2025-07-06T23:30:26.045664497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\" id:\"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\" pid:3136 exit_status:1 exited_at:{seconds:1751844626 nanos:43379001}" Jul 6 23:30:26.079908 containerd[2001]: time="2025-07-06T23:30:26.079792245Z" level=info msg="received exit event container_id:\"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\" id:\"9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e\" pid:3136 exit_status:1 exited_at:{seconds:1751844626 nanos:43379001}" Jul 6 23:30:26.134551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e-rootfs.mount: Deactivated successfully. Jul 6 23:30:26.298672 systemd[1]: cri-containerd-bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98.scope: Deactivated successfully. Jul 6 23:30:26.300342 systemd[1]: cri-containerd-bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98.scope: Consumed 24.360s CPU time, 110.8M memory peak, 624K read from disk. Jul 6 23:30:26.307638 containerd[2001]: time="2025-07-06T23:30:26.307559686Z" level=info msg="received exit event container_id:\"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\" id:\"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\" pid:3900 exit_status:1 exited_at:{seconds:1751844626 nanos:306805138}" Jul 6 23:30:26.308782 containerd[2001]: time="2025-07-06T23:30:26.308669338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\" id:\"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\" pid:3900 exit_status:1 exited_at:{seconds:1751844626 nanos:306805138}" Jul 6 23:30:26.366134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:26.491349 kubelet[3305]: E0706 23:30:26.491258 3305 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-251?timeout=10s\": context deadline exceeded"
Jul 6 23:30:26.955211 kubelet[3305]: I0706 23:30:26.954898 3305 scope.go:117] "RemoveContainer" containerID="9bc7a67fea3075a2a39d6c200beeae4276e801306d941372cf52ac15e30d316e"
Jul 6 23:30:26.960265 containerd[2001]: time="2025-07-06T23:30:26.960083401Z" level=info msg="CreateContainer within sandbox \"aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 6 23:30:26.962476 kubelet[3305]: I0706 23:30:26.962328 3305 scope.go:117] "RemoveContainer" containerID="bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98"
Jul 6 23:30:26.968144 containerd[2001]: time="2025-07-06T23:30:26.968049337Z" level=info msg="CreateContainer within sandbox \"d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 6 23:30:26.985504 containerd[2001]: time="2025-07-06T23:30:26.985440385Z" level=info msg="Container 943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:30:27.001090 containerd[2001]: time="2025-07-06T23:30:27.001014345Z" level=info msg="Container ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:30:27.017775 containerd[2001]: time="2025-07-06T23:30:27.017556933Z" level=info msg="CreateContainer within sandbox \"aa6a5a5231a5273412901aac3ec8fe9056445aed4cc3cdcf43beb636780711b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4\""
Jul 6 23:30:27.019214 containerd[2001]: time="2025-07-06T23:30:27.018963513Z" level=info msg="StartContainer for \"943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4\""
Jul 6 23:30:27.025542 containerd[2001]: time="2025-07-06T23:30:27.025427541Z" level=info msg="connecting to shim 943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4" address="unix:///run/containerd/s/1cb451ca3542fec1e4e0f5d6b4783bbad4ab61180c076621172584b1dbf5e7dc" protocol=ttrpc version=3
Jul 6 23:30:27.025843 containerd[2001]: time="2025-07-06T23:30:27.025798101Z" level=info msg="CreateContainer within sandbox \"d04b302eb0e88daefa7986954d4a0ab7777e3ded5d4a5026cc64d62b9cf2019c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\""
Jul 6 23:30:27.026980 containerd[2001]: time="2025-07-06T23:30:27.026911821Z" level=info msg="StartContainer for \"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\""
Jul 6 23:30:27.030915 containerd[2001]: time="2025-07-06T23:30:27.030769593Z" level=info msg="connecting to shim ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87" address="unix:///run/containerd/s/4deb1c5ea844bcd73b15fd307afd95dcee210a619eb16d0cde4db5da3c59a5fd" protocol=ttrpc version=3
Jul 6 23:30:27.067539 systemd[1]: Started cri-containerd-943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4.scope - libcontainer container 943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4.
Jul 6 23:30:27.085866 systemd[1]: Started cri-containerd-ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87.scope - libcontainer container ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87.
Jul 6 23:30:27.143227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409025843.mount: Deactivated successfully.
Jul 6 23:30:27.200924 containerd[2001]: time="2025-07-06T23:30:27.200667826Z" level=info msg="StartContainer for \"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\" returns successfully"
Jul 6 23:30:27.255763 containerd[2001]: time="2025-07-06T23:30:27.255385031Z" level=info msg="StartContainer for \"943883057bc5b374ce8e3ceb111905d8dc288701a356c1d79e83227c951e85b4\" returns successfully"
Jul 6 23:30:31.445068 systemd[1]: cri-containerd-4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f.scope: Deactivated successfully.
Jul 6 23:30:31.446045 systemd[1]: cri-containerd-4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f.scope: Consumed 4.518s CPU time, 20.3M memory peak, 192K read from disk.
Jul 6 23:30:31.453212 containerd[2001]: time="2025-07-06T23:30:31.453142299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\" id:\"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\" pid:3157 exit_status:1 exited_at:{seconds:1751844631 nanos:452422203}"
Jul 6 23:30:31.454700 containerd[2001]: time="2025-07-06T23:30:31.453253263Z" level=info msg="received exit event container_id:\"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\" id:\"4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f\" pid:3157 exit_status:1 exited_at:{seconds:1751844631 nanos:452422203}"
Jul 6 23:30:31.501820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f-rootfs.mount: Deactivated successfully.
Jul 6 23:30:31.997659 kubelet[3305]: I0706 23:30:31.997600 3305 scope.go:117] "RemoveContainer" containerID="4b067a7cb1487b2e02ff248c042242e572dc6d492d25bf06cd8cef833da40e6f"
Jul 6 23:30:32.002029 containerd[2001]: time="2025-07-06T23:30:32.001918334Z" level=info msg="CreateContainer within sandbox \"e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 6 23:30:32.046212 containerd[2001]: time="2025-07-06T23:30:32.042817370Z" level=info msg="Container 8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:30:32.047653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766345211.mount: Deactivated successfully.
Jul 6 23:30:32.064088 containerd[2001]: time="2025-07-06T23:30:32.063910334Z" level=info msg="CreateContainer within sandbox \"e2ce84c9985408d7522ec3ed0d28611d968217f0fef71370f60f1a9960288ea6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73\""
Jul 6 23:30:32.064915 containerd[2001]: time="2025-07-06T23:30:32.064872386Z" level=info msg="StartContainer for \"8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73\""
Jul 6 23:30:32.067362 containerd[2001]: time="2025-07-06T23:30:32.067290134Z" level=info msg="connecting to shim 8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73" address="unix:///run/containerd/s/36d92f1eb715be9b29306179fa7f553a7d01f4e8171efaf2e2ea070789869f5f" protocol=ttrpc version=3
Jul 6 23:30:32.109533 systemd[1]: Started cri-containerd-8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73.scope - libcontainer container 8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73.
Jul 6 23:30:32.195332 containerd[2001]: time="2025-07-06T23:30:32.195249807Z" level=info msg="StartContainer for \"8b73a1b40c5ca46988a4e2a900c4a3a42bb4424e91d9804650d97364faad8c73\" returns successfully"
Jul 6 23:30:36.491650 kubelet[3305]: E0706 23:30:36.491577 3305 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-251?timeout=10s\": context deadline exceeded"
Jul 6 23:30:36.606328 containerd[2001]: time="2025-07-06T23:30:36.606261549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b15475b9a1b996ee7746740e5ef74280a723c4004bee6e9ae184cd62cda8799e\" id:\"b6a5722eecd567ae34f232858c4cd7c69aeae5a76879f709bb7586136d899568\" pid:6732 exit_status:1 exited_at:{seconds:1751844636 nanos:605845413}"
Jul 6 23:30:38.725777 systemd[1]: cri-containerd-ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87.scope: Deactivated successfully.
Jul 6 23:30:38.726848 containerd[2001]: time="2025-07-06T23:30:38.726779196Z" level=info msg="received exit event container_id:\"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\" id:\"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\" pid:6626 exit_status:1 exited_at:{seconds:1751844638 nanos:726000312}"
Jul 6 23:30:38.728454 containerd[2001]: time="2025-07-06T23:30:38.726888348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\" id:\"ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87\" pid:6626 exit_status:1 exited_at:{seconds:1751844638 nanos:726000312}"
Jul 6 23:30:38.727052 systemd[1]: cri-containerd-ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87.scope: Consumed 481ms CPU time, 34.9M memory peak, 1M read from disk.
Jul 6 23:30:38.770450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87-rootfs.mount: Deactivated successfully.
Jul 6 23:30:39.031607 kubelet[3305]: I0706 23:30:39.031450 3305 scope.go:117] "RemoveContainer" containerID="bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98"
Jul 6 23:30:39.035114 kubelet[3305]: I0706 23:30:39.034752 3305 scope.go:117] "RemoveContainer" containerID="ad7a830fc5bc1a24dd4144b4545a1e25bc746234e20066db3bf499338964ba87"
Jul 6 23:30:39.035114 kubelet[3305]: E0706 23:30:39.035010 3305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-dhxm6_tigera-operator(c9b8df54-96c8-48f9-a253-6c915fba4af6)\"" pod="tigera-operator/tigera-operator-747864d56d-dhxm6" podUID="c9b8df54-96c8-48f9-a253-6c915fba4af6"
Jul 6 23:30:39.036799 containerd[2001]: time="2025-07-06T23:30:39.036686541Z" level=info msg="RemoveContainer for \"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\""
Jul 6 23:30:39.046156 containerd[2001]: time="2025-07-06T23:30:39.046078413Z" level=info msg="RemoveContainer for \"bc210ed4ec24f85701d732f3a71723ebd4fa9891f6b4fc1a2be650eaba8ccb98\" returns successfully"
Jul 6 23:30:45.806393 containerd[2001]: time="2025-07-06T23:30:45.806153095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c219658775266c6e91e59d03225a2eccc753e58d0af1e15dd12f5448059bc778\" id:\"0c5c06dd0626cdd2008a46083548bb9bcd1587bb2c9146e492dff82119d23947\" pid:6775 exited_at:{seconds:1751844645 nanos:805142935}"
Jul 6 23:30:46.492428 kubelet[3305]: E0706 23:30:46.492337 3305 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-251?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"