Oct 12 23:58:51.130877 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 12 23:58:51.130930 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Oct 12 22:32:01 -00 2025 Oct 12 23:58:51.130956 kernel: KASLR disabled due to lack of seed Oct 12 23:58:51.130973 kernel: efi: EFI v2.7 by EDK II Oct 12 23:58:51.130988 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Oct 12 23:58:51.131003 kernel: secureboot: Secure boot disabled Oct 12 23:58:51.131020 kernel: ACPI: Early table checksum verification disabled Oct 12 23:58:51.131035 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 12 23:58:51.131050 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 12 23:58:51.131065 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 12 23:58:51.131082 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 12 23:58:51.131101 kernel: ACPI: FACS 0x0000000078630000 000040 Oct 12 23:58:51.131116 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 12 23:58:51.131131 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 12 23:58:51.131149 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 12 23:58:51.131165 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 12 23:58:51.131205 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 12 23:58:51.131227 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 12 23:58:51.131244 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 12 23:58:51.131261 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 12 23:58:51.131278 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 12 23:58:51.131293 kernel: printk: legacy bootconsole [uart0] enabled Oct 12 23:58:51.131309 kernel: ACPI: Use ACPI SPCR as default console: No Oct 12 23:58:51.131325 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 12 23:58:51.131342 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Oct 12 23:58:51.131358 kernel: Zone ranges: Oct 12 23:58:51.131374 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 12 23:58:51.131396 kernel: DMA32 empty Oct 12 23:58:51.131412 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 12 23:58:51.131427 kernel: Device empty Oct 12 23:58:51.131443 kernel: Movable zone start for each node Oct 12 23:58:51.131459 kernel: Early memory node ranges Oct 12 23:58:51.131474 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Oct 12 23:58:51.131490 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Oct 12 23:58:51.131505 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Oct 12 23:58:51.131521 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 12 23:58:51.131536 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 12 23:58:51.131552 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 12 23:58:51.131568 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 12 23:58:51.131588 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Oct 12 23:58:51.131610 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 12 23:58:51.131627 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 12 23:58:51.131644 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Oct 12 23:58:51.131661 kernel: psci: probing for conduit method from ACPI. Oct 12 23:58:51.131683 kernel: psci: PSCIv1.0 detected in firmware. Oct 12 23:58:51.131700 kernel: psci: Using standard PSCI v0.2 function IDs Oct 12 23:58:51.131716 kernel: psci: Trusted OS migration not required Oct 12 23:58:51.131732 kernel: psci: SMC Calling Convention v1.1 Oct 12 23:58:51.131749 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Oct 12 23:58:51.131766 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 12 23:58:51.131782 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 12 23:58:51.131799 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 12 23:58:51.131816 kernel: Detected PIPT I-cache on CPU0 Oct 12 23:58:51.131833 kernel: CPU features: detected: GIC system register CPU interface Oct 12 23:58:51.131849 kernel: CPU features: detected: Spectre-v2 Oct 12 23:58:51.131869 kernel: CPU features: detected: Spectre-v3a Oct 12 23:58:51.131886 kernel: CPU features: detected: Spectre-BHB Oct 12 23:58:51.131903 kernel: CPU features: detected: ARM erratum 1742098 Oct 12 23:58:51.131920 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 12 23:58:51.131937 kernel: alternatives: applying boot alternatives Oct 12 23:58:51.131956 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910 Oct 12 23:58:51.131975 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 12 23:58:51.131993 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 12 23:58:51.132010 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 12 23:58:51.132027 kernel: Fallback order for Node 0: 0 Oct 12 23:58:51.132047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Oct 12 23:58:51.132088 kernel: Policy zone: Normal Oct 12 23:58:51.132106 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 12 23:58:51.132123 kernel: software IO TLB: area num 2. Oct 12 23:58:51.132140 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB) Oct 12 23:58:51.132157 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 12 23:58:51.132174 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 12 23:58:51.133242 kernel: rcu: RCU event tracing is enabled. Oct 12 23:58:51.133270 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 12 23:58:51.133287 kernel: Trampoline variant of Tasks RCU enabled. Oct 12 23:58:51.133305 kernel: Tracing variant of Tasks RCU enabled. Oct 12 23:58:51.133322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
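The kernel command line captured above (BOOT_IMAGE=/flatcar/vmlinuz-a, mount.usr=/dev/mapper/usr, verity.usrhash=..., root=LABEL=ROOT, flatcar.oem.id=ec2, ...) is what dracut and Ignition key off later in this boot. Below is a minimal Python sketch of splitting such a line into bare flags and key=value pairs; reading /proc/cmdline and the printed keys are illustrative, not part of this log.

    # Minimal sketch: split a kernel command line, like the one logged above,
    # into bare flags and key=value pairs. Repeated keys (e.g. console=)
    # simply keep the last value in this simplified version.
    import shlex

    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in shlex.split(cmdline):
            key, sep, value = token.partition("=")
            params[key] = value if sep else True  # bare flags (e.g. earlycon) -> True
        return params

    if __name__ == "__main__":
        # On a running system the same line is exposed at /proc/cmdline.
        with open("/proc/cmdline") as f:
            params = parse_cmdline(f.read().strip())
        print(params.get("root"), params.get("verity.usrhash"))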
Oct 12 23:58:51.133348 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 12 23:58:51.133366 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 12 23:58:51.133383 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 12 23:58:51.133400 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 12 23:58:51.133416 kernel: GICv3: 96 SPIs implemented Oct 12 23:58:51.133433 kernel: GICv3: 0 Extended SPIs implemented Oct 12 23:58:51.133449 kernel: Root IRQ handler: gic_handle_irq Oct 12 23:58:51.133465 kernel: GICv3: GICv3 features: 16 PPIs Oct 12 23:58:51.133482 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 12 23:58:51.133499 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 12 23:58:51.133516 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 12 23:58:51.133533 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Oct 12 23:58:51.133554 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Oct 12 23:58:51.133570 kernel: GICv3: using LPI property table @0x0000000400110000 Oct 12 23:58:51.133587 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 12 23:58:51.133604 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Oct 12 23:58:51.133620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 12 23:58:51.133637 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 12 23:58:51.133654 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 12 23:58:51.133670 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 12 23:58:51.133687 kernel: Console: colour dummy device 80x25 Oct 12 23:58:51.133704 kernel: printk: legacy console [tty1] enabled Oct 12 23:58:51.133722 kernel: ACPI: Core revision 20240827 Oct 12 23:58:51.133777 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 12 23:58:51.133800 kernel: pid_max: default: 32768 minimum: 301 Oct 12 23:58:51.133818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 12 23:58:51.133836 kernel: landlock: Up and running. Oct 12 23:58:51.133853 kernel: SELinux: Initializing. Oct 12 23:58:51.133870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 12 23:58:51.133887 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 12 23:58:51.133904 kernel: rcu: Hierarchical SRCU implementation. Oct 12 23:58:51.133921 kernel: rcu: Max phase no-delay instances is 400. Oct 12 23:58:51.133944 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 12 23:58:51.133960 kernel: Remapping and enabling EFI services. Oct 12 23:58:51.133977 kernel: smp: Bringing up secondary CPUs ... Oct 12 23:58:51.133994 kernel: Detected PIPT I-cache on CPU1 Oct 12 23:58:51.134010 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 12 23:58:51.134027 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Oct 12 23:58:51.134044 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 12 23:58:51.134062 kernel: smp: Brought up 1 node, 2 CPUs Oct 12 23:58:51.134079 kernel: SMP: Total of 2 processors activated. 
Oct 12 23:58:51.134109 kernel: CPU: All CPU(s) started at EL1 Oct 12 23:58:51.134127 kernel: CPU features: detected: 32-bit EL0 Support Oct 12 23:58:51.134149 kernel: CPU features: detected: 32-bit EL1 Support Oct 12 23:58:51.134167 kernel: CPU features: detected: CRC32 instructions Oct 12 23:58:51.134218 kernel: alternatives: applying system-wide alternatives Oct 12 23:58:51.134242 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved) Oct 12 23:58:51.134261 kernel: devtmpfs: initialized Oct 12 23:58:51.134284 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 12 23:58:51.134302 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 12 23:58:51.134320 kernel: 17040 pages in range for non-PLT usage Oct 12 23:58:51.134338 kernel: 508560 pages in range for PLT usage Oct 12 23:58:51.134355 kernel: pinctrl core: initialized pinctrl subsystem Oct 12 23:58:51.134372 kernel: SMBIOS 3.0.0 present. Oct 12 23:58:51.134390 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 12 23:58:51.134407 kernel: DMI: Memory slots populated: 0/0 Oct 12 23:58:51.134424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 12 23:58:51.134446 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 12 23:58:51.134464 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 12 23:58:51.134481 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 12 23:58:51.134499 kernel: audit: initializing netlink subsys (disabled) Oct 12 23:58:51.134517 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1 Oct 12 23:58:51.134534 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 12 23:58:51.134552 kernel: cpuidle: using governor menu Oct 12 23:58:51.134569 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Oct 12 23:58:51.134587 kernel: ASID allocator initialised with 65536 entries Oct 12 23:58:51.134608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 12 23:58:51.134626 kernel: Serial: AMBA PL011 UART driver Oct 12 23:58:51.134643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 12 23:58:51.134661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 12 23:58:51.134678 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 12 23:58:51.134695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 12 23:58:51.134713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 12 23:58:51.134730 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 12 23:58:51.134748 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 12 23:58:51.134769 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 12 23:58:51.134787 kernel: ACPI: Added _OSI(Module Device) Oct 12 23:58:51.134804 kernel: ACPI: Added _OSI(Processor Device) Oct 12 23:58:51.134822 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 12 23:58:51.134839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 12 23:58:51.134856 kernel: ACPI: Interpreter enabled Oct 12 23:58:51.134874 kernel: ACPI: Using GIC for interrupt routing Oct 12 23:58:51.134891 kernel: ACPI: MCFG table detected, 1 entries Oct 12 23:58:51.134908 kernel: ACPI: CPU0 has been hot-added Oct 12 23:58:51.134929 kernel: ACPI: CPU1 has been hot-added Oct 12 23:58:51.134947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 12 23:58:51.135815 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 12 23:58:51.136068 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 12 23:58:51.137378 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 12 23:58:51.137619 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 12 23:58:51.137807 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 12 23:58:51.137842 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 12 23:58:51.137861 kernel: acpiphp: Slot [1] registered Oct 12 23:58:51.137880 kernel: acpiphp: Slot [2] registered Oct 12 23:58:51.137897 kernel: acpiphp: Slot [3] registered Oct 12 23:58:51.137915 kernel: acpiphp: Slot [4] registered Oct 12 23:58:51.137932 kernel: acpiphp: Slot [5] registered Oct 12 23:58:51.137950 kernel: acpiphp: Slot [6] registered Oct 12 23:58:51.137967 kernel: acpiphp: Slot [7] registered Oct 12 23:58:51.137984 kernel: acpiphp: Slot [8] registered Oct 12 23:58:51.138002 kernel: acpiphp: Slot [9] registered Oct 12 23:58:51.138024 kernel: acpiphp: Slot [10] registered Oct 12 23:58:51.138041 kernel: acpiphp: Slot [11] registered Oct 12 23:58:51.138059 kernel: acpiphp: Slot [12] registered Oct 12 23:58:51.138076 kernel: acpiphp: Slot [13] registered Oct 12 23:58:51.138094 kernel: acpiphp: Slot [14] registered Oct 12 23:58:51.138111 kernel: acpiphp: Slot [15] registered Oct 12 23:58:51.138129 kernel: acpiphp: Slot [16] registered Oct 12 23:58:51.138147 kernel: acpiphp: Slot [17] registered Oct 12 23:58:51.138166 kernel: acpiphp: Slot [18] registered Oct 12 23:58:51.138255 kernel: acpiphp: Slot [19] registered Oct 12 23:58:51.138278 kernel: acpiphp: Slot [20] registered Oct 12 23:58:51.138297 kernel: acpiphp: Slot [21] registered Oct 12 
23:58:51.138315 kernel: acpiphp: Slot [22] registered Oct 12 23:58:51.138333 kernel: acpiphp: Slot [23] registered Oct 12 23:58:51.138351 kernel: acpiphp: Slot [24] registered Oct 12 23:58:51.138369 kernel: acpiphp: Slot [25] registered Oct 12 23:58:51.140818 kernel: acpiphp: Slot [26] registered Oct 12 23:58:51.140843 kernel: acpiphp: Slot [27] registered Oct 12 23:58:51.140861 kernel: acpiphp: Slot [28] registered Oct 12 23:58:51.140888 kernel: acpiphp: Slot [29] registered Oct 12 23:58:51.140907 kernel: acpiphp: Slot [30] registered Oct 12 23:58:51.140925 kernel: acpiphp: Slot [31] registered Oct 12 23:58:51.140943 kernel: PCI host bridge to bus 0000:00 Oct 12 23:58:51.141171 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 12 23:58:51.141449 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 12 23:58:51.141619 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 12 23:58:51.141786 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 12 23:58:51.142007 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Oct 12 23:58:51.142269 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Oct 12 23:58:51.142473 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Oct 12 23:58:51.142686 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Oct 12 23:58:51.142875 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Oct 12 23:58:51.143061 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 12 23:58:51.143325 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Oct 12 23:58:51.143516 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Oct 12 23:58:51.143704 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Oct 12 23:58:51.143888 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Oct 12 23:58:51.144129 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 12 23:58:51.144415 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Oct 12 23:58:51.144607 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Oct 12 23:58:51.144803 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Oct 12 23:58:51.144988 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Oct 12 23:58:51.145177 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Oct 12 23:58:51.145398 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 12 23:58:51.145564 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 12 23:58:51.145731 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 12 23:58:51.145757 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 12 23:58:51.145783 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 12 23:58:51.145802 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 12 23:58:51.145820 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 12 23:58:51.145838 kernel: iommu: Default domain type: Translated Oct 12 23:58:51.145856 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 12 23:58:51.145874 kernel: efivars: Registered efivars operations Oct 12 23:58:51.145892 kernel: vgaarb: loaded Oct 12 23:58:51.145910 kernel: clocksource: Switched to clocksource arch_sys_counter 
Oct 12 23:58:51.145928 kernel: VFS: Disk quotas dquot_6.6.0 Oct 12 23:58:51.145950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 12 23:58:51.145967 kernel: pnp: PnP ACPI init Oct 12 23:58:51.146165 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 12 23:58:51.146219 kernel: pnp: PnP ACPI: found 1 devices Oct 12 23:58:51.146240 kernel: NET: Registered PF_INET protocol family Oct 12 23:58:51.146259 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 12 23:58:51.146278 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 12 23:58:51.148844 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 12 23:58:51.148876 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 12 23:58:51.148895 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 12 23:58:51.148913 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 12 23:58:51.148931 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 12 23:58:51.148949 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 12 23:58:51.148967 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 12 23:58:51.148985 kernel: PCI: CLS 0 bytes, default 64 Oct 12 23:58:51.149002 kernel: kvm [1]: HYP mode not available Oct 12 23:58:51.149020 kernel: Initialise system trusted keyrings Oct 12 23:58:51.149043 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 12 23:58:51.149061 kernel: Key type asymmetric registered Oct 12 23:58:51.149078 kernel: Asymmetric key parser 'x509' registered Oct 12 23:58:51.149096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 12 23:58:51.149114 kernel: io scheduler mq-deadline registered Oct 12 23:58:51.149131 kernel: io scheduler kyber registered Oct 12 23:58:51.149149 kernel: io scheduler bfq registered Oct 12 23:58:51.149443 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 12 23:58:51.149479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 12 23:58:51.149498 kernel: ACPI: button: Power Button [PWRB] Oct 12 23:58:51.149515 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Oct 12 23:58:51.149533 kernel: ACPI: button: Sleep Button [SLPB] Oct 12 23:58:51.149550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 12 23:58:51.149569 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 12 23:58:51.149758 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 12 23:58:51.149784 kernel: printk: legacy console [ttyS0] disabled Oct 12 23:58:51.149802 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 12 23:58:51.149825 kernel: printk: legacy console [ttyS0] enabled Oct 12 23:58:51.149843 kernel: printk: legacy bootconsole [uart0] disabled Oct 12 23:58:51.149860 kernel: thunder_xcv, ver 1.0 Oct 12 23:58:51.149878 kernel: thunder_bgx, ver 1.0 Oct 12 23:58:51.149895 kernel: nicpf, ver 1.0 Oct 12 23:58:51.149912 kernel: nicvf, ver 1.0 Oct 12 23:58:51.150099 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 12 23:58:51.150298 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-12T23:58:50 UTC (1760313530) Oct 12 23:58:51.150329 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 12 23:58:51.150348 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 
(0,80000003) counters available Oct 12 23:58:51.150366 kernel: NET: Registered PF_INET6 protocol family Oct 12 23:58:51.150384 kernel: watchdog: NMI not fully supported Oct 12 23:58:51.150401 kernel: Segment Routing with IPv6 Oct 12 23:58:51.150418 kernel: watchdog: Hard watchdog permanently disabled Oct 12 23:58:51.150436 kernel: In-situ OAM (IOAM) with IPv6 Oct 12 23:58:51.150453 kernel: NET: Registered PF_PACKET protocol family Oct 12 23:58:51.150471 kernel: Key type dns_resolver registered Oct 12 23:58:51.150492 kernel: registered taskstats version 1 Oct 12 23:58:51.150510 kernel: Loading compiled-in X.509 certificates Oct 12 23:58:51.150527 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: b8447a1087a9e9c4d5b9d4c2f2bba5a69a74f139' Oct 12 23:58:51.150545 kernel: Demotion targets for Node 0: null Oct 12 23:58:51.150562 kernel: Key type .fscrypt registered Oct 12 23:58:51.150579 kernel: Key type fscrypt-provisioning registered Oct 12 23:58:51.150596 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 12 23:58:51.150613 kernel: ima: Allocated hash algorithm: sha1 Oct 12 23:58:51.150631 kernel: ima: No architecture policies found Oct 12 23:58:51.150652 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 12 23:58:51.150670 kernel: clk: Disabling unused clocks Oct 12 23:58:51.150687 kernel: PM: genpd: Disabling unused power domains Oct 12 23:58:51.150705 kernel: Warning: unable to open an initial console. Oct 12 23:58:51.150722 kernel: Freeing unused kernel memory: 38976K Oct 12 23:58:51.150740 kernel: Run /init as init process Oct 12 23:58:51.150757 kernel: with arguments: Oct 12 23:58:51.150774 kernel: /init Oct 12 23:58:51.150791 kernel: with environment: Oct 12 23:58:51.150808 kernel: HOME=/ Oct 12 23:58:51.150830 kernel: TERM=linux Oct 12 23:58:51.150847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 12 23:58:51.150866 systemd[1]: Successfully made /usr/ read-only. Oct 12 23:58:51.150889 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 12 23:58:51.150909 systemd[1]: Detected virtualization amazon. Oct 12 23:58:51.150928 systemd[1]: Detected architecture arm64. Oct 12 23:58:51.150946 systemd[1]: Running in initrd. Oct 12 23:58:51.150968 systemd[1]: No hostname configured, using default hostname. Oct 12 23:58:51.150988 systemd[1]: Hostname set to . Oct 12 23:58:51.151006 systemd[1]: Initializing machine ID from VM UUID. Oct 12 23:58:51.151025 systemd[1]: Queued start job for default target initrd.target. Oct 12 23:58:51.151043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 12 23:58:51.151062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 12 23:58:51.151081 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 12 23:58:51.151101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 12 23:58:51.151124 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 12 23:58:51.151144 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Oct 12 23:58:51.151165 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 12 23:58:51.151201 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 12 23:58:51.151226 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 12 23:58:51.151245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 12 23:58:51.151264 systemd[1]: Reached target paths.target - Path Units. Oct 12 23:58:51.151289 systemd[1]: Reached target slices.target - Slice Units. Oct 12 23:58:51.151308 systemd[1]: Reached target swap.target - Swaps. Oct 12 23:58:51.151326 systemd[1]: Reached target timers.target - Timer Units. Oct 12 23:58:51.151345 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 12 23:58:51.151364 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 12 23:58:51.151383 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 12 23:58:51.151401 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 12 23:58:51.151420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 12 23:58:51.151443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 12 23:58:51.151462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 12 23:58:51.151481 systemd[1]: Reached target sockets.target - Socket Units. Oct 12 23:58:51.151500 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 12 23:58:51.151518 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 12 23:58:51.151537 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 12 23:58:51.151557 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 12 23:58:51.151576 systemd[1]: Starting systemd-fsck-usr.service... Oct 12 23:58:51.151594 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 12 23:58:51.151618 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 12 23:58:51.151637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 12 23:58:51.151684 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 12 23:58:51.151706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 12 23:58:51.151730 systemd[1]: Finished systemd-fsck-usr.service. Oct 12 23:58:51.151750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 12 23:58:51.151802 systemd-journald[258]: Collecting audit messages is disabled. Oct 12 23:58:51.151845 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 12 23:58:51.151868 kernel: Bridge firewalling registered Oct 12 23:58:51.151904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 12 23:58:51.151924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 12 23:58:51.151944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
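The dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device style unit names above are systemd's escaped form of device paths such as /dev/disk/by-label/EFI-SYSTEM. The sketch below is a rough approximation of what systemd-escape --path produces; corner cases such as a leading dot or non-ASCII bytes are ignored here.

    # Rough approximation of systemd path escaping: '/' becomes '-', ASCII
    # letters, digits, '_' and '.' pass through, everything else (including
    # '-') is encoded as \xNN, then the unit suffix is appended.
    def path_to_device_unit(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif (ch.isascii() and ch.isalnum()) or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out) + ".device"

    # Matches the unit names in the log above:
    print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
    print(path_to_device_unit("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"))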
Oct 12 23:58:51.151964 systemd-journald[258]: Journal started Oct 12 23:58:51.152004 systemd-journald[258]: Runtime Journal (/run/log/journal/ec291bfe4b790f45e6509d43475ecb04) is 8M, max 75.3M, 67.3M free. Oct 12 23:58:51.093317 systemd-modules-load[259]: Inserted module 'overlay' Oct 12 23:58:51.132332 systemd-modules-load[259]: Inserted module 'br_netfilter' Oct 12 23:58:51.160802 systemd[1]: Started systemd-journald.service - Journal Service. Oct 12 23:58:51.167891 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 12 23:58:51.176482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 12 23:58:51.183578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 12 23:58:51.192394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 12 23:58:51.220952 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 12 23:58:51.231689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 12 23:58:51.246853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 12 23:58:51.254339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 12 23:58:51.257551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 12 23:58:51.267280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 12 23:58:51.274449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 12 23:58:51.316222 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910 Oct 12 23:58:51.365422 systemd-resolved[299]: Positive Trust Anchors: Oct 12 23:58:51.365456 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 12 23:58:51.365522 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 12 23:58:51.490224 kernel: SCSI subsystem initialized Oct 12 23:58:51.498240 kernel: Loading iSCSI transport class v2.0-870. Oct 12 23:58:51.510226 kernel: iscsi: registered transport (tcp) Oct 12 23:58:51.532439 kernel: iscsi: registered transport (qla4xxx) Oct 12 23:58:51.532523 kernel: QLogic iSCSI HBA Driver Oct 12 23:58:51.567363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Oct 12 23:58:51.601178 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 12 23:58:51.613733 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 12 23:58:51.653277 kernel: random: crng init done Oct 12 23:58:51.653477 systemd-resolved[299]: Defaulting to hostname 'linux'. Oct 12 23:58:51.657365 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 12 23:58:51.666720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 12 23:58:51.702252 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 12 23:58:51.709114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 12 23:58:51.796237 kernel: raid6: neonx8 gen() 6585 MB/s Oct 12 23:58:51.813238 kernel: raid6: neonx4 gen() 6594 MB/s Oct 12 23:58:51.830232 kernel: raid6: neonx2 gen() 5459 MB/s Oct 12 23:58:51.847233 kernel: raid6: neonx1 gen() 3965 MB/s Oct 12 23:58:51.864228 kernel: raid6: int64x8 gen() 3666 MB/s Oct 12 23:58:51.881233 kernel: raid6: int64x4 gen() 3714 MB/s Oct 12 23:58:51.898227 kernel: raid6: int64x2 gen() 3616 MB/s Oct 12 23:58:51.916316 kernel: raid6: int64x1 gen() 2771 MB/s Oct 12 23:58:51.916370 kernel: raid6: using algorithm neonx4 gen() 6594 MB/s Oct 12 23:58:51.935238 kernel: raid6: .... xor() 4881 MB/s, rmw enabled Oct 12 23:58:51.935301 kernel: raid6: using neon recovery algorithm Oct 12 23:58:51.943997 kernel: xor: measuring software checksum speed Oct 12 23:58:51.944063 kernel: 8regs : 12934 MB/sec Oct 12 23:58:51.945221 kernel: 32regs : 11536 MB/sec Oct 12 23:58:51.947499 kernel: arm64_neon : 8822 MB/sec Oct 12 23:58:51.947552 kernel: xor: using function: 8regs (12934 MB/sec) Oct 12 23:58:52.038237 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 12 23:58:52.050065 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 12 23:58:52.061363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 12 23:58:52.124481 systemd-udevd[508]: Using default interface naming scheme 'v255'. Oct 12 23:58:52.135282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 12 23:58:52.150431 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 12 23:58:52.186111 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation Oct 12 23:58:52.231057 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 12 23:58:52.236555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 12 23:58:52.378238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 12 23:58:52.391732 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 12 23:58:52.542813 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 12 23:58:52.542882 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 12 23:58:52.550971 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 12 23:58:52.551408 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 12 23:58:52.561238 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d8:57:8a:32:65 Oct 12 23:58:52.565095 (udev-worker)[572]: Network interface NamePolicy= disabled on kernel command line. Oct 12 23:58:52.572602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 12 23:58:52.572863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 12 23:58:52.578728 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 12 23:58:52.589685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 12 23:58:52.599215 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 12 23:58:52.611016 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 12 23:58:52.611087 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 12 23:58:52.622225 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 12 23:58:52.633960 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 12 23:58:52.634035 kernel: GPT:9289727 != 33554431 Oct 12 23:58:52.634070 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 12 23:58:52.636218 kernel: GPT:9289727 != 33554431 Oct 12 23:58:52.636288 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 12 23:58:52.637225 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 12 23:58:52.642014 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 12 23:58:52.680451 kernel: nvme nvme0: using unchecked data buffer Oct 12 23:58:52.810154 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Oct 12 23:58:52.866639 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 12 23:58:52.889692 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Oct 12 23:58:52.892651 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Oct 12 23:58:52.920209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Oct 12 23:58:52.946230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 12 23:58:52.953742 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 12 23:58:52.954949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 12 23:58:52.962878 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 12 23:58:52.970542 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 12 23:58:52.979588 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 12 23:58:53.005891 disk-uuid[688]: Primary Header is updated. Oct 12 23:58:53.005891 disk-uuid[688]: Secondary Entries is updated. Oct 12 23:58:53.005891 disk-uuid[688]: Secondary Header is updated. Oct 12 23:58:53.022816 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 12 23:58:53.021774 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 12 23:58:54.052265 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 12 23:58:54.052334 disk-uuid[691]: The operation has completed successfully. Oct 12 23:58:54.234409 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 12 23:58:54.234960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 12 23:58:54.337984 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 12 23:58:54.362758 sh[956]: Success Oct 12 23:58:54.391304 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
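The GPT complaints above (GPT:9289727 != 33554431, "Alternate GPT header not at the end of the disk") are the usual sign of a disk image written to a larger EBS volume than it was built for; the disk-uuid step above then reports the primary and secondary headers as updated. Outside of this boot flow, an equivalent manual repair can be sketched with sgdisk; the flags are real, but this invocation is illustrative and is not claimed to be what disk-uuid.service actually runs.

    # Sketch of an equivalent manual repair for the GPT warnings above:
    # sgdisk -e relocates the backup header/partition table to the true end
    # of the (now larger) disk; partprobe asks the kernel to re-read it.
    import subprocess

    DISK = "/dev/nvme0n1"  # the device named in this log

    subprocess.run(["sgdisk", "-e", DISK], check=True)
    subprocess.run(["partprobe", DISK], check=True)
    # sgdisk -G would additionally randomize disk/partition GUIDs, which is
    # roughly what the "Generate new UUID for disk GPT" unit name suggests.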
Oct 12 23:58:54.391382 kernel: device-mapper: uevent: version 1.0.3 Oct 12 23:58:54.394112 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 12 23:58:54.406252 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 12 23:58:54.508951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 12 23:58:54.511351 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 12 23:58:54.538959 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 12 23:58:54.561243 kernel: BTRFS: device fsid e4495086-3456-43e0-be7b-4c3c53a67174 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (979) Oct 12 23:58:54.561307 kernel: BTRFS info (device dm-0): first mount of filesystem e4495086-3456-43e0-be7b-4c3c53a67174 Oct 12 23:58:54.565096 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 12 23:58:54.603180 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 12 23:58:54.603272 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 12 23:58:54.603299 kernel: BTRFS info (device dm-0): enabling free space tree Oct 12 23:58:54.618095 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 12 23:58:54.622239 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 12 23:58:54.627122 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 12 23:58:54.633408 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 12 23:58:54.646458 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 12 23:58:54.710266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014) Oct 12 23:58:54.714980 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 12 23:58:54.715064 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 12 23:58:54.734502 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 12 23:58:54.734596 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 12 23:58:54.742277 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 12 23:58:54.744498 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 12 23:58:54.756118 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 12 23:58:54.839145 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 12 23:58:54.849752 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 12 23:58:54.926534 systemd-networkd[1149]: lo: Link UP Oct 12 23:58:54.929877 systemd-networkd[1149]: lo: Gained carrier Oct 12 23:58:54.937594 systemd-networkd[1149]: Enumeration completed Oct 12 23:58:54.937777 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 12 23:58:54.942454 systemd[1]: Reached target network.target - Network. Oct 12 23:58:54.954036 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 12 23:58:54.954044 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 12 23:58:54.978844 systemd-networkd[1149]: eth0: Link UP Oct 12 23:58:54.978858 systemd-networkd[1149]: eth0: Gained carrier Oct 12 23:58:54.978996 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 12 23:58:55.003297 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.31.230/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 12 23:58:55.051821 ignition[1093]: Ignition 2.22.0 Oct 12 23:58:55.051850 ignition[1093]: Stage: fetch-offline Oct 12 23:58:55.052787 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:55.052810 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:55.055281 ignition[1093]: Ignition finished successfully Oct 12 23:58:55.065701 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 12 23:58:55.073413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 12 23:58:55.134355 ignition[1160]: Ignition 2.22.0 Oct 12 23:58:55.134859 ignition[1160]: Stage: fetch Oct 12 23:58:55.135407 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:55.135431 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:55.135562 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:55.156384 ignition[1160]: PUT result: OK Oct 12 23:58:55.159735 ignition[1160]: parsed url from cmdline: "" Oct 12 23:58:55.159751 ignition[1160]: no config URL provided Oct 12 23:58:55.159766 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Oct 12 23:58:55.159791 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Oct 12 23:58:55.159837 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:55.164311 ignition[1160]: PUT result: OK Oct 12 23:58:55.164385 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 12 23:58:55.173319 ignition[1160]: GET result: OK Oct 12 23:58:55.173527 ignition[1160]: parsing config with SHA512: 1b4d1f6f8592f0285c4090a8ccd5a3f0ec07e415a99eca80e3a46d1c7650466e0a48b55b70fcefeee38f731c15036005beeaea6e621eb549e1cdd88f744b3433 Oct 12 23:58:55.186537 unknown[1160]: fetched base config from "system" Oct 12 23:58:55.186564 unknown[1160]: fetched base config from "system" Oct 12 23:58:55.186577 unknown[1160]: fetched user config from "aws" Oct 12 23:58:55.190382 ignition[1160]: fetch: fetch complete Oct 12 23:58:55.190394 ignition[1160]: fetch: fetch passed Oct 12 23:58:55.190493 ignition[1160]: Ignition finished successfully Oct 12 23:58:55.198917 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 12 23:58:55.203974 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 12 23:58:55.252399 ignition[1166]: Ignition 2.22.0 Oct 12 23:58:55.252894 ignition[1166]: Stage: kargs Oct 12 23:58:55.253474 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:55.253497 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:55.253624 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:55.265569 ignition[1166]: PUT result: OK Oct 12 23:58:55.270478 ignition[1166]: kargs: kargs passed Oct 12 23:58:55.270592 ignition[1166]: Ignition finished successfully Oct 12 23:58:55.274769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 12 23:58:55.283016 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
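A quick cross-check of the DHCPv4 lease logged above (172.31.31.230/20 with gateway 172.31.16.1 from 172.31.16.1), using Python's ipaddress module just to make the subnet arithmetic explicit; values are taken from the log.

    # The /20 network that the leased address belongs to, and a check that
    # the offered gateway falls inside it.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.31.230/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True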
Oct 12 23:58:55.343747 ignition[1173]: Ignition 2.22.0 Oct 12 23:58:55.343772 ignition[1173]: Stage: disks Oct 12 23:58:55.344804 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:55.344827 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:55.344964 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:55.356333 ignition[1173]: PUT result: OK Oct 12 23:58:55.361876 ignition[1173]: disks: disks passed Oct 12 23:58:55.362001 ignition[1173]: Ignition finished successfully Oct 12 23:58:55.366687 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 12 23:58:55.370534 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 12 23:58:55.377665 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 12 23:58:55.380512 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 12 23:58:55.387997 systemd[1]: Reached target sysinit.target - System Initialization. Oct 12 23:58:55.390420 systemd[1]: Reached target basic.target - Basic System. Oct 12 23:58:55.397327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 12 23:58:55.462855 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 12 23:58:55.471648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 12 23:58:55.481565 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 12 23:58:55.616219 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1aa1d0b4-cbac-4728-b9e0-662fa574e9ad r/w with ordered data mode. Quota mode: none. Oct 12 23:58:55.618030 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 12 23:58:55.622293 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 12 23:58:55.630541 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 12 23:58:55.634810 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 12 23:58:55.639243 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 12 23:58:55.643433 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 12 23:58:55.646728 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 12 23:58:55.665914 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 12 23:58:55.672432 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 12 23:58:55.688239 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Oct 12 23:58:55.692918 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 12 23:58:55.692973 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 12 23:58:55.705658 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 12 23:58:55.705732 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 12 23:58:55.708780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
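The Ignition fetch stage above does the standard IMDSv2 exchange: a PUT to http://169.254.169.254/latest/api/token for a session token, then a GET of the versioned user-data path with that token, and it logs a SHA512 of the config it parses. Below is a minimal sketch of the same exchange; the header names are AWS's documented IMDSv2 headers, it only works from inside an instance, and it is an assumption here that Ignition's logged digest is computed over exactly these raw bytes.

    # Sketch of the IMDSv2 token + user-data fetch seen in the log, plus a
    # SHA512 of the fetched bytes for comparison with Ignition's
    # "parsing config with SHA512: ..." line.
    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch_user_data(ttl_seconds: int = 300) -> bytes:
        token_req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(token_req, timeout=5) as resp:
            token = resp.read().decode()

        data_req = urllib.request.Request(
            IMDS + "/2019-10-01/user-data",  # same versioned path as in the log
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(data_req, timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        data = fetch_user_data()
        print(hashlib.sha512(data).hexdigest())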
Oct 12 23:58:55.778493 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Oct 12 23:58:55.788688 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Oct 12 23:58:55.797640 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Oct 12 23:58:55.805977 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Oct 12 23:58:55.961542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 12 23:58:55.969724 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 12 23:58:55.978489 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 12 23:58:56.008928 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 12 23:58:56.011668 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 12 23:58:56.041330 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 12 23:58:56.062228 ignition[1314]: INFO : Ignition 2.22.0 Oct 12 23:58:56.062228 ignition[1314]: INFO : Stage: mount Oct 12 23:58:56.062228 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:56.067986 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:56.067986 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:56.075335 ignition[1314]: INFO : PUT result: OK Oct 12 23:58:56.080518 ignition[1314]: INFO : mount: mount passed Oct 12 23:58:56.082434 ignition[1314]: INFO : Ignition finished successfully Oct 12 23:58:56.087097 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 12 23:58:56.091043 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 12 23:58:56.354407 systemd-networkd[1149]: eth0: Gained IPv6LL Oct 12 23:58:56.621096 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 12 23:58:56.672230 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Oct 12 23:58:56.677094 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726 Oct 12 23:58:56.677299 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 12 23:58:56.684886 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 12 23:58:56.684969 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Oct 12 23:58:56.688548 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 12 23:58:56.753635 ignition[1343]: INFO : Ignition 2.22.0 Oct 12 23:58:56.755700 ignition[1343]: INFO : Stage: files Oct 12 23:58:56.755700 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:56.755700 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:56.755700 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:56.766179 ignition[1343]: INFO : PUT result: OK Oct 12 23:58:56.771526 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Oct 12 23:58:56.774837 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 12 23:58:56.774837 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 12 23:58:56.785424 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 12 23:58:56.789565 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 12 23:58:56.793159 unknown[1343]: wrote ssh authorized keys file for user: core Oct 12 23:58:56.796037 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 12 23:58:56.799080 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 12 23:58:56.803609 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 12 23:58:56.859623 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 12 23:58:57.036255 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 12 23:58:57.036255 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 12 23:58:57.045900 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 12 23:58:57.073554 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 12 23:58:57.073554 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 12 23:58:57.073554 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 12 23:58:57.073554 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 12 23:58:57.073554 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 12 23:58:57.073554 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 12 23:58:57.520353 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 12 23:58:57.914711 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 12 23:58:57.914711 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 12 23:58:57.923254 ignition[1343]: INFO : files: files passed Oct 12 23:58:57.923254 ignition[1343]: INFO : Ignition finished successfully Oct 12 23:58:57.954749 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 12 23:58:57.960021 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 12 23:58:57.966286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 12 23:58:57.987248 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 12 23:58:57.989848 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 12 23:58:58.003235 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 12 23:58:58.003235 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 12 23:58:58.012506 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 12 23:58:58.018147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 12 23:58:58.024709 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 12 23:58:58.031149 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Oct 12 23:58:58.113256 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 12 23:58:58.113708 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 12 23:58:58.122026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 12 23:58:58.127547 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 12 23:58:58.130513 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 12 23:58:58.131967 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 12 23:58:58.171756 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 12 23:58:58.180976 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 12 23:58:58.221609 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 12 23:58:58.225061 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 12 23:58:58.233307 systemd[1]: Stopped target timers.target - Timer Units. Oct 12 23:58:58.235619 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 12 23:58:58.235867 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 12 23:58:58.245555 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 12 23:58:58.248467 systemd[1]: Stopped target basic.target - Basic System. Oct 12 23:58:58.252314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 12 23:58:58.259529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 12 23:58:58.262784 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 12 23:58:58.267841 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 12 23:58:58.275231 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 12 23:58:58.277784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 12 23:58:58.285881 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 12 23:58:58.288512 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 12 23:58:58.295482 systemd[1]: Stopped target swap.target - Swaps. Oct 12 23:58:58.298264 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 12 23:58:58.298544 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 12 23:58:58.306699 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 12 23:58:58.307091 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 12 23:58:58.311844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 12 23:58:58.312101 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 12 23:58:58.319906 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 12 23:58:58.320156 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 12 23:58:58.324859 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 12 23:58:58.325182 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 12 23:58:58.328555 systemd[1]: ignition-files.service: Deactivated successfully. Oct 12 23:58:58.328826 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Oct 12 23:58:58.338373 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 12 23:58:58.348389 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 12 23:58:58.352381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 12 23:58:58.352768 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 12 23:58:58.355849 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 12 23:58:58.357459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 12 23:58:58.381617 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 12 23:58:58.385490 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 12 23:58:58.419097 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 12 23:58:58.427256 ignition[1396]: INFO : Ignition 2.22.0 Oct 12 23:58:58.427256 ignition[1396]: INFO : Stage: umount Oct 12 23:58:58.427256 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 12 23:58:58.427256 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 12 23:58:58.427256 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 12 23:58:58.439539 ignition[1396]: INFO : PUT result: OK Oct 12 23:58:58.444795 ignition[1396]: INFO : umount: umount passed Oct 12 23:58:58.446762 ignition[1396]: INFO : Ignition finished successfully Oct 12 23:58:58.453620 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 12 23:58:58.453868 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 12 23:58:58.457450 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 12 23:58:58.457541 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 12 23:58:58.460763 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 12 23:58:58.460870 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 12 23:58:58.466653 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 12 23:58:58.466761 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 12 23:58:58.471543 systemd[1]: Stopped target network.target - Network. Oct 12 23:58:58.475641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 12 23:58:58.475910 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 12 23:58:58.483633 systemd[1]: Stopped target paths.target - Path Units. Oct 12 23:58:58.488163 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 12 23:58:58.490577 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 12 23:58:58.495985 systemd[1]: Stopped target slices.target - Slice Units. Oct 12 23:58:58.498680 systemd[1]: Stopped target sockets.target - Socket Units. Oct 12 23:58:58.505620 systemd[1]: iscsid.socket: Deactivated successfully. Oct 12 23:58:58.505704 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 12 23:58:58.508082 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 12 23:58:58.508158 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 12 23:58:58.513669 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 12 23:58:58.513792 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 12 23:58:58.516479 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 12 23:58:58.516576 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Oct 12 23:58:58.523927 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 12 23:58:58.527585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 12 23:58:58.570663 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 12 23:58:58.571171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 12 23:58:58.591384 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 12 23:58:58.594803 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 12 23:58:58.598308 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 12 23:58:58.607460 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 12 23:58:58.610810 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 12 23:58:58.613721 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 12 23:58:58.613812 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 12 23:58:58.619383 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 12 23:58:58.622543 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 12 23:58:58.630557 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 12 23:58:58.642297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 12 23:58:58.644570 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 12 23:58:58.650030 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 12 23:58:58.650132 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 12 23:58:58.652839 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 12 23:58:58.652920 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 12 23:58:58.656861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 12 23:58:58.676518 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 12 23:58:58.678429 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 12 23:58:58.679276 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 12 23:58:58.679816 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 12 23:58:58.694945 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 12 23:58:58.697366 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 12 23:58:58.709597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 12 23:58:58.709905 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 12 23:58:58.718097 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 12 23:58:58.718248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 12 23:58:58.718438 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 12 23:58:58.725278 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 12 23:58:58.731961 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 12 23:58:58.732094 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 12 23:58:58.739249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 12 23:58:58.739842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 12 23:58:58.744956 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 12 23:58:58.745059 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 12 23:58:58.751665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 12 23:58:58.760876 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 12 23:58:58.761033 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 12 23:58:58.767660 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 12 23:58:58.767761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 12 23:58:58.775983 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 12 23:58:58.776108 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 12 23:58:58.782014 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 12 23:58:58.782118 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 12 23:58:58.787353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 12 23:58:58.787456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 12 23:58:58.804276 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 12 23:58:58.804413 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Oct 12 23:58:58.804503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 12 23:58:58.804604 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 12 23:58:58.807651 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 12 23:58:58.810735 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 12 23:58:58.837170 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 12 23:58:58.839304 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 12 23:58:58.843051 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 12 23:58:58.849458 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 12 23:58:58.895512 systemd[1]: Switching root. Oct 12 23:58:58.945787 systemd-journald[258]: Journal stopped Oct 12 23:59:00.899268 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). Oct 12 23:59:00.899426 kernel: SELinux: policy capability network_peer_controls=1 Oct 12 23:59:00.899483 kernel: SELinux: policy capability open_perms=1 Oct 12 23:59:00.899516 kernel: SELinux: policy capability extended_socket_class=1 Oct 12 23:59:00.899548 kernel: SELinux: policy capability always_check_network=0 Oct 12 23:59:00.899579 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 12 23:59:00.899618 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 12 23:59:00.899649 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 12 23:59:00.899679 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 12 23:59:00.899713 kernel: SELinux: policy capability userspace_initial_context=0 Oct 12 23:59:00.899746 systemd[1]: Successfully loaded SELinux policy in 82.228ms. 
Oct 12 23:59:00.899801 kernel: audit: type=1403 audit(1760313539.176:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 12 23:59:00.899834 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.679ms. Oct 12 23:59:00.899868 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 12 23:59:00.899907 systemd[1]: Detected virtualization amazon. Oct 12 23:59:00.899940 systemd[1]: Detected architecture arm64. Oct 12 23:59:00.899970 systemd[1]: Detected first boot. Oct 12 23:59:00.900004 systemd[1]: Initializing machine ID from VM UUID. Oct 12 23:59:00.900090 zram_generator::config[1440]: No configuration found. Oct 12 23:59:00.900135 kernel: NET: Registered PF_VSOCK protocol family Oct 12 23:59:00.900171 systemd[1]: Populated /etc with preset unit settings. Oct 12 23:59:00.900250 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 12 23:59:00.903018 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 12 23:59:00.903100 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 12 23:59:00.903135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 12 23:59:00.903167 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 12 23:59:00.903325 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 12 23:59:00.903368 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 12 23:59:00.903404 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 12 23:59:00.903438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 12 23:59:00.903468 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 12 23:59:00.903511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 12 23:59:00.903542 systemd[1]: Created slice user.slice - User and Session Slice. Oct 12 23:59:00.903573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 12 23:59:00.903604 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 12 23:59:00.903636 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 12 23:59:00.903667 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 12 23:59:00.903696 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 12 23:59:00.903729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 12 23:59:00.903759 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 12 23:59:00.903794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 12 23:59:00.903824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 12 23:59:00.903854 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 12 23:59:00.903882 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Oct 12 23:59:00.903912 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 12 23:59:00.903942 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 12 23:59:00.903973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 12 23:59:00.904004 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 12 23:59:00.904038 systemd[1]: Reached target slices.target - Slice Units. Oct 12 23:59:00.904107 systemd[1]: Reached target swap.target - Swaps. Oct 12 23:59:00.904143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 12 23:59:00.904173 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 12 23:59:00.904508 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 12 23:59:00.904546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 12 23:59:00.904579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 12 23:59:00.904610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 12 23:59:00.904641 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 12 23:59:00.904678 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 12 23:59:00.904710 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 12 23:59:00.904740 systemd[1]: Mounting media.mount - External Media Directory... Oct 12 23:59:00.904770 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 12 23:59:00.904804 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 12 23:59:00.904905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 12 23:59:00.904948 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 12 23:59:00.904980 systemd[1]: Reached target machines.target - Containers. Oct 12 23:59:00.905009 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 12 23:59:00.905043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 12 23:59:00.905074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 12 23:59:00.905102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 12 23:59:00.905135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 12 23:59:00.905163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 12 23:59:00.905225 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 12 23:59:00.905258 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 12 23:59:00.905286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 12 23:59:00.905321 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 12 23:59:00.905353 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 12 23:59:00.905381 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 12 23:59:00.905411 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Oct 12 23:59:00.905450 systemd[1]: Stopped systemd-fsck-usr.service. Oct 12 23:59:00.905481 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 12 23:59:00.905512 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 12 23:59:00.905540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 12 23:59:00.905568 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 12 23:59:00.905601 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 12 23:59:00.905632 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 12 23:59:00.905661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 12 23:59:00.905696 systemd[1]: verity-setup.service: Deactivated successfully. Oct 12 23:59:00.905727 systemd[1]: Stopped verity-setup.service. Oct 12 23:59:00.905756 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 12 23:59:00.905784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 12 23:59:00.905813 systemd[1]: Mounted media.mount - External Media Directory. Oct 12 23:59:00.905841 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 12 23:59:00.905880 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 12 23:59:00.905914 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 12 23:59:00.905945 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 12 23:59:00.905977 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 12 23:59:00.906005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 12 23:59:00.906035 kernel: fuse: init (API version 7.41) Oct 12 23:59:00.906064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 12 23:59:00.906092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 12 23:59:00.906123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 12 23:59:00.906152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 12 23:59:00.906251 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 12 23:59:00.906290 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 12 23:59:00.906318 kernel: ACPI: bus type drm_connector registered Oct 12 23:59:00.906357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 12 23:59:00.906388 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 12 23:59:00.906417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 12 23:59:00.906447 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 12 23:59:00.906542 systemd-journald[1523]: Collecting audit messages is disabled. Oct 12 23:59:00.906598 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 12 23:59:00.906631 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Oct 12 23:59:00.906660 systemd-journald[1523]: Journal started Oct 12 23:59:00.906705 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec291bfe4b790f45e6509d43475ecb04) is 8M, max 75.3M, 67.3M free. Oct 12 23:59:00.271802 systemd[1]: Queued start job for default target multi-user.target. Oct 12 23:59:00.287927 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 12 23:59:00.288777 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 12 23:59:00.920152 systemd[1]: Started systemd-journald.service - Journal Service. Oct 12 23:59:00.931283 kernel: loop: module loaded Oct 12 23:59:00.931976 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 12 23:59:00.932699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 12 23:59:00.956516 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 12 23:59:00.968269 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 12 23:59:00.976394 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 12 23:59:00.982298 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 12 23:59:00.982386 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 12 23:59:00.991581 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 12 23:59:01.007142 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 12 23:59:01.012875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 12 23:59:01.019011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 12 23:59:01.036444 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 12 23:59:01.042113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 12 23:59:01.051467 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 12 23:59:01.058617 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 12 23:59:01.061493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 12 23:59:01.071551 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 12 23:59:01.081283 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 12 23:59:01.096274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 12 23:59:01.100775 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 12 23:59:01.103758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 12 23:59:01.121100 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec291bfe4b790f45e6509d43475ecb04 is 119.760ms for 934 entries. Oct 12 23:59:01.121100 systemd-journald[1523]: System Journal (/var/log/journal/ec291bfe4b790f45e6509d43475ecb04) is 8M, max 195.6M, 187.6M free. Oct 12 23:59:01.263832 systemd-journald[1523]: Received client request to flush runtime journal. 
Oct 12 23:59:01.263928 kernel: loop0: detected capacity change from 0 to 100632 Oct 12 23:59:01.263985 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 12 23:59:01.129399 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 12 23:59:01.135514 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 12 23:59:01.151885 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 12 23:59:01.272837 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 12 23:59:01.293424 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 12 23:59:01.314420 kernel: loop1: detected capacity change from 0 to 61264 Oct 12 23:59:01.298062 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 12 23:59:01.305614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 12 23:59:01.341896 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Oct 12 23:59:01.341945 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Oct 12 23:59:01.346919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 12 23:59:01.360407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 12 23:59:01.370027 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 12 23:59:01.439291 kernel: loop2: detected capacity change from 0 to 119368 Oct 12 23:59:01.494762 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 12 23:59:01.518683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 12 23:59:01.525234 kernel: loop3: detected capacity change from 0 to 200800 Oct 12 23:59:01.573119 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Oct 12 23:59:01.573166 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Oct 12 23:59:01.579958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 12 23:59:01.714273 kernel: loop4: detected capacity change from 0 to 100632 Oct 12 23:59:01.742219 kernel: loop5: detected capacity change from 0 to 61264 Oct 12 23:59:01.771279 kernel: loop6: detected capacity change from 0 to 119368 Oct 12 23:59:01.805619 kernel: loop7: detected capacity change from 0 to 200800 Oct 12 23:59:01.851466 (sd-merge)[1601]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Oct 12 23:59:01.853044 (sd-merge)[1601]: Merged extensions into '/usr'. Oct 12 23:59:01.865587 systemd[1]: Reload requested from client PID 1573 ('systemd-sysext') (unit systemd-sysext.service)... Oct 12 23:59:01.865620 systemd[1]: Reloading... Oct 12 23:59:02.118317 ldconfig[1568]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 12 23:59:02.130270 zram_generator::config[1630]: No configuration found. Oct 12 23:59:02.550718 systemd[1]: Reloading finished in 681 ms. Oct 12 23:59:02.588116 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 12 23:59:02.592312 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 12 23:59:02.595907 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 12 23:59:02.611282 systemd[1]: Starting ensure-sysext.service... 
Oct 12 23:59:02.618467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 12 23:59:02.626366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 12 23:59:02.659429 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)... Oct 12 23:59:02.659466 systemd[1]: Reloading... Oct 12 23:59:02.668821 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 12 23:59:02.669426 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 12 23:59:02.670229 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 12 23:59:02.670981 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 12 23:59:02.673061 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 12 23:59:02.673873 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Oct 12 23:59:02.674128 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Oct 12 23:59:02.682249 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Oct 12 23:59:02.682288 systemd-tmpfiles[1681]: Skipping /boot Oct 12 23:59:02.701586 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Oct 12 23:59:02.702037 systemd-tmpfiles[1681]: Skipping /boot Oct 12 23:59:02.764490 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Oct 12 23:59:02.826250 zram_generator::config[1714]: No configuration found. Oct 12 23:59:03.105377 (udev-worker)[1732]: Network interface NamePolicy= disabled on kernel command line. Oct 12 23:59:03.525156 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 12 23:59:03.525800 systemd[1]: Reloading finished in 865 ms. Oct 12 23:59:03.602586 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 12 23:59:03.607615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 12 23:59:03.731589 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 12 23:59:03.738658 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 12 23:59:03.742645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 12 23:59:03.746051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 12 23:59:03.757640 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 12 23:59:03.800862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 12 23:59:03.803594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 12 23:59:03.803845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 12 23:59:03.809830 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 12 23:59:03.820960 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 12 23:59:03.830436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 12 23:59:03.838858 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 12 23:59:03.848502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 12 23:59:03.848960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 12 23:59:03.867877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 12 23:59:03.868425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 12 23:59:03.881685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 12 23:59:03.886081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 12 23:59:03.888731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 12 23:59:03.889086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 12 23:59:03.890523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 12 23:59:03.894002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 12 23:59:03.902110 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 12 23:59:03.903369 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 12 23:59:03.932754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 12 23:59:03.945818 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 12 23:59:03.951760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 12 23:59:03.962030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 12 23:59:03.964656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 12 23:59:03.964909 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 12 23:59:03.965296 systemd[1]: Reached target time-set.target - System Time Set. Oct 12 23:59:03.976847 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 12 23:59:04.027459 systemd[1]: Finished ensure-sysext.service. Oct 12 23:59:04.075315 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 12 23:59:04.103352 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 12 23:59:04.108009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 12 23:59:04.108551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 12 23:59:04.118440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 12 23:59:04.121313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 12 23:59:04.125178 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 12 23:59:04.127292 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 12 23:59:04.174298 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 12 23:59:04.174790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 12 23:59:04.193992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 12 23:59:04.200342 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 12 23:59:04.214216 augenrules[1942]: No rules Oct 12 23:59:04.219622 systemd[1]: audit-rules.service: Deactivated successfully. Oct 12 23:59:04.220391 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 12 23:59:04.233422 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 12 23:59:04.234008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 12 23:59:04.237269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 12 23:59:04.241462 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 12 23:59:04.243945 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 12 23:59:04.296315 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 12 23:59:04.315031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 12 23:59:04.350614 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 12 23:59:04.355042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 12 23:59:04.479617 systemd-networkd[1898]: lo: Link UP Oct 12 23:59:04.479633 systemd-networkd[1898]: lo: Gained carrier Oct 12 23:59:04.483680 systemd-networkd[1898]: Enumeration completed Oct 12 23:59:04.483891 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 12 23:59:04.489177 systemd-networkd[1898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 12 23:59:04.489362 systemd-networkd[1898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 12 23:59:04.491303 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 12 23:59:04.492436 systemd-resolved[1899]: Positive Trust Anchors: Oct 12 23:59:04.492460 systemd-resolved[1899]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 12 23:59:04.492519 systemd-resolved[1899]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 12 23:59:04.499170 systemd-networkd[1898]: eth0: Link UP Oct 12 23:59:04.499693 systemd-networkd[1898]: eth0: Gained carrier Oct 12 23:59:04.499731 systemd-networkd[1898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 12 23:59:04.501531 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 12 23:59:04.514334 systemd-networkd[1898]: eth0: DHCPv4 address 172.31.31.230/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 12 23:59:04.522948 systemd-resolved[1899]: Defaulting to hostname 'linux'. Oct 12 23:59:04.526799 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 12 23:59:04.532369 systemd[1]: Reached target network.target - Network. Oct 12 23:59:04.534690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 12 23:59:04.539380 systemd[1]: Reached target sysinit.target - System Initialization. Oct 12 23:59:04.542269 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 12 23:59:04.545498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 12 23:59:04.548996 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 12 23:59:04.551895 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 12 23:59:04.554978 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 12 23:59:04.558006 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 12 23:59:04.558061 systemd[1]: Reached target paths.target - Path Units. Oct 12 23:59:04.560325 systemd[1]: Reached target timers.target - Timer Units. Oct 12 23:59:04.564757 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 12 23:59:04.570410 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 12 23:59:04.577451 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 12 23:59:04.580950 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 12 23:59:04.584232 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 12 23:59:04.596520 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 12 23:59:04.600441 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 12 23:59:04.604846 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 12 23:59:04.608724 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Oct 12 23:59:04.612359 systemd[1]: Reached target sockets.target - Socket Units. Oct 12 23:59:04.617383 systemd[1]: Reached target basic.target - Basic System. Oct 12 23:59:04.619997 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 12 23:59:04.620514 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 12 23:59:04.622714 systemd[1]: Starting containerd.service - containerd container runtime... Oct 12 23:59:04.629480 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 12 23:59:04.635471 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 12 23:59:04.641766 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 12 23:59:04.650563 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 12 23:59:04.656626 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 12 23:59:04.657604 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 12 23:59:04.671773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 12 23:59:04.678502 systemd[1]: Started ntpd.service - Network Time Service. Oct 12 23:59:04.692209 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 12 23:59:04.702399 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 12 23:59:04.712617 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 12 23:59:04.751574 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 12 23:59:04.767611 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 12 23:59:04.774137 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 12 23:59:04.775062 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 12 23:59:04.780561 systemd[1]: Starting update-engine.service - Update Engine... Oct 12 23:59:04.792816 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 12 23:59:04.799879 jq[1974]: false Oct 12 23:59:04.804285 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 12 23:59:04.827161 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 12 23:59:04.841678 extend-filesystems[1975]: Found /dev/nvme0n1p6 Oct 12 23:59:04.833683 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 12 23:59:04.867955 tar[1994]: linux-arm64/LICENSE Oct 12 23:59:04.867955 tar[1994]: linux-arm64/helm Oct 12 23:59:04.880960 extend-filesystems[1975]: Found /dev/nvme0n1p9 Oct 12 23:59:04.885693 jq[1990]: true Oct 12 23:59:04.897793 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 12 23:59:04.898275 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 12 23:59:04.914219 coreos-metadata[1971]: Oct 12 23:59:04.909 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 12 23:59:04.916822 coreos-metadata[1971]: Oct 12 23:59:04.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 12 23:59:04.919474 extend-filesystems[1975]: Checking size of /dev/nvme0n1p9 Oct 12 23:59:04.923480 coreos-metadata[1971]: Oct 12 23:59:04.921 INFO Fetch successful Oct 12 23:59:04.923480 coreos-metadata[1971]: Oct 12 23:59:04.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 12 23:59:04.935392 coreos-metadata[1971]: Oct 12 23:59:04.927 INFO Fetch successful Oct 12 23:59:04.935392 coreos-metadata[1971]: Oct 12 23:59:04.927 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 12 23:59:04.935778 coreos-metadata[1971]: Oct 12 23:59:04.935 INFO Fetch successful Oct 12 23:59:04.935778 coreos-metadata[1971]: Oct 12 23:59:04.935 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 12 23:59:04.944803 coreos-metadata[1971]: Oct 12 23:59:04.944 INFO Fetch successful Oct 12 23:59:04.944803 coreos-metadata[1971]: Oct 12 23:59:04.944 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 12 23:59:04.950240 coreos-metadata[1971]: Oct 12 23:59:04.948 INFO Fetch failed with 404: resource not found Oct 12 23:59:04.950240 coreos-metadata[1971]: Oct 12 23:59:04.948 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 12 23:59:04.952972 coreos-metadata[1971]: Oct 12 23:59:04.952 INFO Fetch successful Oct 12 23:59:04.952972 coreos-metadata[1971]: Oct 12 23:59:04.952 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 12 23:59:04.965080 coreos-metadata[1971]: Oct 12 23:59:04.961 INFO Fetch successful Oct 12 23:59:04.965080 coreos-metadata[1971]: Oct 12 23:59:04.961 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 12 23:59:04.965080 coreos-metadata[1971]: Oct 12 23:59:04.964 INFO Fetch successful Oct 12 23:59:04.965080 coreos-metadata[1971]: Oct 12 23:59:04.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 12 23:59:04.966077 coreos-metadata[1971]: Oct 12 23:59:04.966 INFO Fetch successful Oct 12 23:59:04.966077 coreos-metadata[1971]: Oct 12 23:59:04.966 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 12 23:59:04.970370 coreos-metadata[1971]: Oct 12 23:59:04.970 INFO Fetch successful Oct 12 23:59:05.009059 dbus-daemon[1972]: [system] SELinux support is enabled Oct 12 23:59:05.017422 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 12 23:59:05.025645 jq[2009]: true Oct 12 23:59:05.036632 systemd[1]: motdgen.service: Deactivated successfully. Oct 12 23:59:05.050069 extend-filesystems[1975]: Resized partition /dev/nvme0n1p9 Oct 12 23:59:05.038332 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 12 23:59:05.051634 dbus-daemon[1972]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1898 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 12 23:59:05.065145 extend-filesystems[2025]: resize2fs 1.47.3 (8-Jul-2025) Oct 12 23:59:05.047700 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 12 23:59:05.047811 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 12 23:59:05.050940 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 12 23:59:05.050975 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 12 23:59:05.083106 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 12 23:59:05.093225 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Oct 12 23:59:05.101232 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 12 23:59:05.105486 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 12 23:59:05.123145 ntpd[1977]: ntpd 4.2.8p18@1.4062-o Sun Oct 12 22:02:10 UTC 2025 (1): Starting Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: ntpd 4.2.8p18@1.4062-o Sun Oct 12 22:02:10 UTC 2025 (1): Starting Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: ---------------------------------------------------- Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: ntp-4 is maintained by Network Time Foundation, Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: corporation. Support and training for ntp-4 are Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: available at https://www.nwtime.org/support Oct 12 23:59:05.124751 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: ---------------------------------------------------- Oct 12 23:59:05.121623 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 12 23:59:05.123286 ntpd[1977]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 12 23:59:05.123307 ntpd[1977]: ---------------------------------------------------- Oct 12 23:59:05.123325 ntpd[1977]: ntp-4 is maintained by Network Time Foundation, Oct 12 23:59:05.123341 ntpd[1977]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 12 23:59:05.123358 ntpd[1977]: corporation. 
Support and training for ntp-4 are Oct 12 23:59:05.123374 ntpd[1977]: available at https://www.nwtime.org/support Oct 12 23:59:05.123390 ntpd[1977]: ---------------------------------------------------- Oct 12 23:59:05.140826 ntpd[1977]: proto: precision = 0.096 usec (-23) Oct 12 23:59:05.142341 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: proto: precision = 0.096 usec (-23) Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: basedate set to 2025-09-30 Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: gps base set to 2025-10-05 (week 2387) Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Listen and drop on 0 v6wildcard [::]:123 Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Listen normally on 2 lo 127.0.0.1:123 Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Listen normally on 3 eth0 172.31.31.230:123 Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: Listen normally on 4 lo [::1]:123 Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: bind(21) AF_INET6 [fe80::4d8:57ff:fe8a:3265%2]:123 flags 0x811 failed: Cannot assign requested address Oct 12 23:59:05.156442 ntpd[1977]: 12 Oct 23:59:05 ntpd[1977]: unable to create socket on eth0 (5) for [fe80::4d8:57ff:fe8a:3265%2]:123 Oct 12 23:59:05.148392 ntpd[1977]: basedate set to 2025-09-30 Oct 12 23:59:05.148428 ntpd[1977]: gps base set to 2025-10-05 (week 2387) Oct 12 23:59:05.148613 ntpd[1977]: Listen and drop on 0 v6wildcard [::]:123 Oct 12 23:59:05.148658 ntpd[1977]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 12 23:59:05.148953 ntpd[1977]: Listen normally on 2 lo 127.0.0.1:123 Oct 12 23:59:05.148996 ntpd[1977]: Listen normally on 3 eth0 172.31.31.230:123 Oct 12 23:59:05.149040 ntpd[1977]: Listen normally on 4 lo [::1]:123 Oct 12 23:59:05.149084 ntpd[1977]: bind(21) AF_INET6 [fe80::4d8:57ff:fe8a:3265%2]:123 flags 0x811 failed: Cannot assign requested address Oct 12 23:59:05.161674 systemd-coredump[2036]: Process 1977 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Oct 12 23:59:05.149119 ntpd[1977]: unable to create socket on eth0 (5) for [fe80::4d8:57ff:fe8a:3265%2]:123 Oct 12 23:59:05.179507 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Oct 12 23:59:05.192566 systemd[1]: Started systemd-coredump@0-2036-0.service - Process Core Dump (PID 2036/UID 0). Oct 12 23:59:05.200463 update_engine[1988]: I20251012 23:59:05.198043 1988 main.cc:92] Flatcar Update Engine starting Oct 12 23:59:05.223114 systemd[1]: Started update-engine.service - Update Engine. Oct 12 23:59:05.233472 update_engine[1988]: I20251012 23:59:05.233173 1988 update_check_scheduler.cc:74] Next update check in 10m3s Oct 12 23:59:05.290341 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 12 23:59:05.294308 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 12 23:59:05.298107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Oct 12 23:59:05.388411 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Oct 12 23:59:05.407340 systemd-logind[1987]: Watching system buttons on /dev/input/event0 (Power Button) Oct 12 23:59:05.408267 systemd-logind[1987]: Watching system buttons on /dev/input/event1 (Sleep Button) Oct 12 23:59:05.413243 extend-filesystems[2025]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 12 23:59:05.413243 extend-filesystems[2025]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 12 23:59:05.413243 extend-filesystems[2025]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Oct 12 23:59:05.439525 extend-filesystems[1975]: Resized filesystem in /dev/nvme0n1p9 Oct 12 23:59:05.443646 bash[2058]: Updated "/home/core/.ssh/authorized_keys" Oct 12 23:59:05.414540 systemd-logind[1987]: New seat seat0. Oct 12 23:59:05.452788 systemd[1]: Started systemd-logind.service - User Login Management. Oct 12 23:59:05.457050 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 12 23:59:05.457489 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 12 23:59:05.462299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 12 23:59:05.483614 systemd[1]: Starting sshkeys.service... Oct 12 23:59:05.567336 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 12 23:59:05.580112 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 12 23:59:05.728626 containerd[2018]: time="2025-10-12T23:59:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 12 23:59:05.735357 containerd[2018]: time="2025-10-12T23:59:05.734614187Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 12 23:59:05.797636 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 12 23:59:05.803532 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 12 23:59:05.815789 dbus-daemon[1972]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 12 23:59:05.831921 systemd[1]: Starting polkit.service - Authorization Manager... 
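The extend-filesystems entries above report the root filesystem on /dev/nvme0n1p9 growing from 553472 to 3587067 blocks of 4 KiB during the online resize. As a quick sanity check on those numbers (not part of the log itself), that is roughly 2.1 GiB before and 13.7 GiB after; a small Python sketch of the arithmetic:

```python
# Sanity-check the ext4 online resize reported by extend-filesystems/resize2fs above.
# Block counts come from the log; the 4096-byte block size is stated there as "(4k)".
BLOCK_SIZE = 4096
old_blocks = 553_472
new_blocks = 3_587_067

def gib(blocks: int) -> float:
    """Convert a count of 4 KiB blocks to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")              # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")              # ~13.68 GiB
print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")
```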
Oct 12 23:59:05.872978 containerd[2018]: time="2025-10-12T23:59:05.872895420Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.436µs" Oct 12 23:59:05.872978 containerd[2018]: time="2025-10-12T23:59:05.872965944Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 12 23:59:05.873145 containerd[2018]: time="2025-10-12T23:59:05.873004752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 12 23:59:05.873786 containerd[2018]: time="2025-10-12T23:59:05.873356196Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 12 23:59:05.873786 containerd[2018]: time="2025-10-12T23:59:05.873410448Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 12 23:59:05.873786 containerd[2018]: time="2025-10-12T23:59:05.873466788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 12 23:59:05.873786 containerd[2018]: time="2025-10-12T23:59:05.873589248Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 12 23:59:05.873786 containerd[2018]: time="2025-10-12T23:59:05.873616572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.874047024Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.874081584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.874113084Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.874139724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.875289564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.875748012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.875822940Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.875852940Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.875945820Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 12 23:59:05.883446 containerd[2018]: 
time="2025-10-12T23:59:05.876478512Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 12 23:59:05.883446 containerd[2018]: time="2025-10-12T23:59:05.876664236Z" level=info msg="metadata content store policy set" policy=shared Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900563988Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900679392Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900712908Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900741300Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900772164Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900800748Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900852684Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900894696Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900927036Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900953352Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.900977244Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.901006200Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.901310196Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 12 23:59:05.903760 containerd[2018]: time="2025-10-12T23:59:05.901359924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901397064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901425300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901452648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901483212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: 
time="2025-10-12T23:59:05.901512408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901538844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901568556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901597176Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.901623612Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.902013936Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.902051904Z" level=info msg="Start snapshots syncer" Oct 12 23:59:05.904488 containerd[2018]: time="2025-10-12T23:59:05.902118504Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 12 23:59:05.904980 containerd[2018]: time="2025-10-12T23:59:05.902505948Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 12 23:59:05.904980 containerd[2018]: time="2025-10-12T23:59:05.902602392Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.902743752Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.902986140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.903035748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.903064092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.903091788Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.903120612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 12 23:59:05.912394 containerd[2018]: time="2025-10-12T23:59:05.903148476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.903177720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915381168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915448524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915520068Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915642624Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915703080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915730092Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915758328Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915811956Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915842580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.915907068Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.916129752Z" level=info msg="runtime interface created" Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.916153008Z" level=info msg="created NRI interface" Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.916177836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Oct 12 23:59:05.916307 containerd[2018]: time="2025-10-12T23:59:05.916247076Z" level=info msg="Connect containerd service" Oct 12 23:59:05.919050 containerd[2018]: time="2025-10-12T23:59:05.917047368Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 12 23:59:05.926533 containerd[2018]: time="2025-10-12T23:59:05.926458896Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 12 23:59:06.007040 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 12 23:59:06.132845 coreos-metadata[2080]: Oct 12 23:59:06.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 12 23:59:06.147228 coreos-metadata[2080]: Oct 12 23:59:06.137 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 12 23:59:06.148283 coreos-metadata[2080]: Oct 12 23:59:06.148 INFO Fetch successful Oct 12 23:59:06.148283 coreos-metadata[2080]: Oct 12 23:59:06.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 12 23:59:06.154284 coreos-metadata[2080]: Oct 12 23:59:06.152 INFO Fetch successful Oct 12 23:59:06.167143 unknown[2080]: wrote ssh authorized keys file for user: core Oct 12 23:59:06.280369 update-ssh-keys[2183]: Updated "/home/core/.ssh/authorized_keys" Oct 12 23:59:06.282206 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 12 23:59:06.291727 systemd[1]: Finished sshkeys.service. Oct 12 23:59:06.325458 locksmithd[2048]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.351778474Z" level=info msg="Start subscribing containerd event" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.351875170Z" level=info msg="Start recovering state" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352006426Z" level=info msg="Start event monitor" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352031650Z" level=info msg="Start cni network conf syncer for default" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352074370Z" level=info msg="Start streaming server" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352097278Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352114774Z" level=info msg="runtime interface starting up..." Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352130038Z" level=info msg="starting plugins..." Oct 12 23:59:06.352599 containerd[2018]: time="2025-10-12T23:59:06.352159390Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 12 23:59:06.355310 containerd[2018]: time="2025-10-12T23:59:06.354777766Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 12 23:59:06.355310 containerd[2018]: time="2025-10-12T23:59:06.354919150Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 12 23:59:06.365544 containerd[2018]: time="2025-10-12T23:59:06.362865934Z" level=info msg="containerd successfully booted in 0.634956s" Oct 12 23:59:06.362985 systemd[1]: Started containerd.service - containerd container runtime. 
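Just above, containerd's CRI plugin warns that no network config was found in /etc/cni/net.d; the CRI config dump earlier shows it looking in confDir /etc/cni/net.d with binDir /opt/cni/bin. Purely as an illustration of the kind of file that check expects, here is a hedged sketch that writes a minimal bridge conflist; the network name, subnet, file name, and plugin choice (bridge/host-local/portmap) are assumptions, not anything this host actually configured, and on a Kubernetes node the CNI add-on normally provides this instead:

```python
# Illustrative only: write a minimal CNI conflist of the shape the CRI plugin scans for
# in /etc/cni/net.d. Names and subnet are hypothetical; the bridge/host-local/portmap
# plugins are assumed to exist under /opt/cni/bin.
import json
from pathlib import Path

conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",              # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],   # illustrative subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

conf_dir = Path("/etc/cni/net.d")
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-containerd-net.conflist").write_text(json.dumps(conflist, indent=2))
```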
Oct 12 23:59:06.393788 systemd-coredump[2042]: Process 1977 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1977: #0 0x0000aaaab72f0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab729fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab72a0240 n/a (ntpd + 0x10240) #3 0x0000aaaab729be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab729d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab72a5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab729738c n/a (ntpd + 0x738c) #7 0x0000ffff94a62034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff94a62118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab72973f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Oct 12 23:59:06.401373 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Oct 12 23:59:06.401666 systemd[1]: ntpd.service: Failed with result 'core-dump'. Oct 12 23:59:06.402415 systemd-networkd[1898]: eth0: Gained IPv6LL Oct 12 23:59:06.422281 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 12 23:59:06.426389 systemd[1]: systemd-coredump@0-2036-0.service: Deactivated successfully. Oct 12 23:59:06.443466 systemd[1]: Reached target network-online.target - Network is Online. Oct 12 23:59:06.449267 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 12 23:59:06.460081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 12 23:59:06.466790 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 12 23:59:06.512509 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Oct 12 23:59:06.516754 systemd[1]: Started ntpd.service - Network Time Service. Oct 12 23:59:06.536593 polkitd[2142]: Started polkitd version 126 Oct 12 23:59:06.580338 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 12 23:59:06.602491 ntpd[2201]: ntpd 4.2.8p18@1.4062-o Sun Oct 12 22:02:10 UTC 2025 (1): Starting Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: ntpd 4.2.8p18@1.4062-o Sun Oct 12 22:02:10 UTC 2025 (1): Starting Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: ---------------------------------------------------- Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: ntp-4 is maintained by Network Time Foundation, Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: corporation. 
Support and training for ntp-4 are Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: available at https://www.nwtime.org/support Oct 12 23:59:06.603794 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: ---------------------------------------------------- Oct 12 23:59:06.602613 ntpd[2201]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: proto: precision = 0.096 usec (-23) Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: basedate set to 2025-09-30 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: gps base set to 2025-10-05 (week 2387) Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listen and drop on 0 v6wildcard [::]:123 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listen normally on 2 lo 127.0.0.1:123 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listen normally on 3 eth0 172.31.31.230:123 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listen normally on 4 lo [::1]:123 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listen normally on 5 eth0 [fe80::4d8:57ff:fe8a:3265%2]:123 Oct 12 23:59:06.606018 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: Listening on routing socket on fd #22 for interface updates Oct 12 23:59:06.602633 ntpd[2201]: ---------------------------------------------------- Oct 12 23:59:06.615527 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 12 23:59:06.615527 ntpd[2201]: 12 Oct 23:59:06 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 12 23:59:06.602650 ntpd[2201]: ntp-4 is maintained by Network Time Foundation, Oct 12 23:59:06.602667 ntpd[2201]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 12 23:59:06.602685 ntpd[2201]: corporation. 
Support and training for ntp-4 are Oct 12 23:59:06.602702 ntpd[2201]: available at https://www.nwtime.org/support Oct 12 23:59:06.602719 ntpd[2201]: ---------------------------------------------------- Oct 12 23:59:06.602995 polkitd[2142]: Loading rules from directory /etc/polkit-1/rules.d Oct 12 23:59:06.603829 ntpd[2201]: proto: precision = 0.096 usec (-23) Oct 12 23:59:06.604175 ntpd[2201]: basedate set to 2025-09-30 Oct 12 23:59:06.604224 ntpd[2201]: gps base set to 2025-10-05 (week 2387) Oct 12 23:59:06.604349 ntpd[2201]: Listen and drop on 0 v6wildcard [::]:123 Oct 12 23:59:06.604392 ntpd[2201]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 12 23:59:06.604661 ntpd[2201]: Listen normally on 2 lo 127.0.0.1:123 Oct 12 23:59:06.604705 ntpd[2201]: Listen normally on 3 eth0 172.31.31.230:123 Oct 12 23:59:06.604747 ntpd[2201]: Listen normally on 4 lo [::1]:123 Oct 12 23:59:06.604788 ntpd[2201]: Listen normally on 5 eth0 [fe80::4d8:57ff:fe8a:3265%2]:123 Oct 12 23:59:06.604828 ntpd[2201]: Listening on routing socket on fd #22 for interface updates Oct 12 23:59:06.610748 polkitd[2142]: Loading rules from directory /run/polkit-1/rules.d Oct 12 23:59:06.610881 polkitd[2142]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 12 23:59:06.611581 polkitd[2142]: Loading rules from directory /usr/local/share/polkit-1/rules.d Oct 12 23:59:06.611644 polkitd[2142]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 12 23:59:06.611729 polkitd[2142]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 12 23:59:06.613282 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 12 23:59:06.613328 ntpd[2201]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 12 23:59:06.623587 polkitd[2142]: Finished loading, compiling and executing 2 rules Oct 12 23:59:06.623984 systemd[1]: Started polkit.service - Authorization Manager. Oct 12 23:59:06.633466 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 12 23:59:06.637302 polkitd[2142]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 12 23:59:06.698811 systemd-resolved[1899]: System hostname changed to 'ip-172-31-31-230'. Oct 12 23:59:06.698816 systemd-hostnamed[2032]: Hostname set to (transient) Oct 12 23:59:06.762977 amazon-ssm-agent[2197]: Initializing new seelog logger Oct 12 23:59:06.766512 amazon-ssm-agent[2197]: New Seelog Logger Creation Complete Oct 12 23:59:06.766512 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.766512 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.766512 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 processing appconfig overrides Oct 12 23:59:06.767476 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.768029 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.7673 INFO Proxy environment variables: Oct 12 23:59:06.768318 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.768562 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 processing appconfig overrides Oct 12 23:59:06.770661 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 12 23:59:06.770661 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.770946 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 processing appconfig overrides Oct 12 23:59:06.779512 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.779777 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:06.780571 amazon-ssm-agent[2197]: 2025/10/12 23:59:06 processing appconfig overrides Oct 12 23:59:06.877010 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.7674 INFO https_proxy: Oct 12 23:59:06.976422 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.7674 INFO http_proxy: Oct 12 23:59:07.076880 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.7674 INFO no_proxy: Oct 12 23:59:07.176713 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.7690 INFO Checking if agent identity type OnPrem can be assumed Oct 12 23:59:07.278326 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.7699 INFO Checking if agent identity type EC2 can be assumed Oct 12 23:59:07.314045 tar[1994]: linux-arm64/README.md Oct 12 23:59:07.359526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 12 23:59:07.377670 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9488 INFO Agent will take identity from EC2 Oct 12 23:59:07.476930 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9548 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Oct 12 23:59:07.577250 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9549 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Oct 12 23:59:07.676486 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9549 INFO [amazon-ssm-agent] Starting Core Agent Oct 12 23:59:07.777085 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9549 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Oct 12 23:59:07.814400 amazon-ssm-agent[2197]: 2025/10/12 23:59:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:07.814400 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 12 23:59:07.814611 amazon-ssm-agent[2197]: 2025/10/12 23:59:07 processing appconfig overrides Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9549 INFO [Registrar] Starting registrar module Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9567 INFO [EC2Identity] Checking disk for registration info Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9567 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:06.9567 INFO [EC2Identity] Generating registration keypair Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.7656 INFO [EC2Identity] Checking write access before registering Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.7664 INFO [EC2Identity] Registering EC2 instance with Systems Manager Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8140 INFO [EC2Identity] EC2 registration was successful. Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8141 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8142 INFO [CredentialRefresher] credentialRefresher has started Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8142 INFO [CredentialRefresher] Starting credentials refresher loop Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8670 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 12 23:59:07.873220 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8711 INFO [CredentialRefresher] Credentials ready Oct 12 23:59:07.877228 amazon-ssm-agent[2197]: 2025-10-12 23:59:07.8724 INFO [CredentialRefresher] Next credential rotation will be in 29.9999198301 minutes Oct 12 23:59:08.348479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:08.367864 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 12 23:59:08.749422 sshd_keygen[2013]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 12 23:59:08.795315 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 12 23:59:08.802697 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 12 23:59:08.807502 systemd[1]: Started sshd@0-172.31.31.230:22-139.178.89.65:48444.service - OpenSSH per-connection server daemon (139.178.89.65:48444). Oct 12 23:59:08.852082 systemd[1]: issuegen.service: Deactivated successfully. Oct 12 23:59:08.854114 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 12 23:59:08.863484 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 12 23:59:08.918597 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 12 23:59:08.930488 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 12 23:59:08.938176 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 12 23:59:08.941753 systemd[1]: Reached target getty.target - Login Prompts. Oct 12 23:59:08.944468 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 12 23:59:08.948310 systemd[1]: Startup finished in 3.647s (kernel) + 8.466s (initrd) + 9.855s (userspace) = 21.969s. Oct 12 23:59:08.963516 amazon-ssm-agent[2197]: 2025-10-12 23:59:08.9629 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 12 23:59:09.065339 amazon-ssm-agent[2197]: 2025-10-12 23:59:08.9708 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2263) started Oct 12 23:59:09.086326 sshd[2250]: Accepted publickey for core from 139.178.89.65 port 48444 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:09.096013 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:09.123156 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 12 23:59:09.128177 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 12 23:59:09.165869 systemd-logind[1987]: New session 1 of user core. Oct 12 23:59:09.169890 amazon-ssm-agent[2197]: 2025-10-12 23:59:08.9709 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 12 23:59:09.181807 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 12 23:59:09.192689 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 12 23:59:09.219851 (systemd)[2274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 12 23:59:09.232345 systemd-logind[1987]: New session c1 of user core. Oct 12 23:59:09.542150 kubelet[2236]: E1012 23:59:09.542090 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 12 23:59:09.546002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 12 23:59:09.546349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 12 23:59:09.548461 systemd[1]: kubelet.service: Consumed 1.343s CPU time, 249.5M memory peak. Oct 12 23:59:09.618462 systemd[2274]: Queued start job for default target default.target. Oct 12 23:59:09.628062 systemd[2274]: Created slice app.slice - User Application Slice. Oct 12 23:59:09.628135 systemd[2274]: Reached target paths.target - Paths. Oct 12 23:59:09.628253 systemd[2274]: Reached target timers.target - Timers. Oct 12 23:59:09.630866 systemd[2274]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 12 23:59:09.669801 systemd[2274]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 12 23:59:09.670409 systemd[2274]: Reached target sockets.target - Sockets. Oct 12 23:59:09.670704 systemd[2274]: Reached target basic.target - Basic System. Oct 12 23:59:09.670964 systemd[2274]: Reached target default.target - Main User Target. Oct 12 23:59:09.671010 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 12 23:59:09.671282 systemd[2274]: Startup finished in 417ms. Oct 12 23:59:09.680576 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 12 23:59:09.838733 systemd[1]: Started sshd@1-172.31.31.230:22-139.178.89.65:48448.service - OpenSSH per-connection server daemon (139.178.89.65:48448). Oct 12 23:59:10.058916 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 48448 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:10.062048 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:10.073303 systemd-logind[1987]: New session 2 of user core. Oct 12 23:59:10.085569 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 12 23:59:10.211638 sshd[2295]: Connection closed by 139.178.89.65 port 48448 Oct 12 23:59:10.212618 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Oct 12 23:59:10.220439 systemd[1]: sshd@1-172.31.31.230:22-139.178.89.65:48448.service: Deactivated successfully. Oct 12 23:59:10.225626 systemd[1]: session-2.scope: Deactivated successfully. Oct 12 23:59:10.227334 systemd-logind[1987]: Session 2 logged out. Waiting for processes to exit. Oct 12 23:59:10.231393 systemd-logind[1987]: Removed session 2. Oct 12 23:59:10.249799 systemd[1]: Started sshd@2-172.31.31.230:22-139.178.89.65:48454.service - OpenSSH per-connection server daemon (139.178.89.65:48454). Oct 12 23:59:10.451704 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 48454 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:10.454210 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:10.463483 systemd-logind[1987]: New session 3 of user core. 
Oct 12 23:59:10.473486 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 12 23:59:10.593216 sshd[2304]: Connection closed by 139.178.89.65 port 48454 Oct 12 23:59:10.594089 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Oct 12 23:59:10.601450 systemd[1]: sshd@2-172.31.31.230:22-139.178.89.65:48454.service: Deactivated successfully. Oct 12 23:59:10.606996 systemd[1]: session-3.scope: Deactivated successfully. Oct 12 23:59:10.610314 systemd-logind[1987]: Session 3 logged out. Waiting for processes to exit. Oct 12 23:59:10.613174 systemd-logind[1987]: Removed session 3. Oct 12 23:59:10.633170 systemd[1]: Started sshd@3-172.31.31.230:22-139.178.89.65:48460.service - OpenSSH per-connection server daemon (139.178.89.65:48460). Oct 12 23:59:10.827245 sshd[2310]: Accepted publickey for core from 139.178.89.65 port 48460 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:10.830233 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:10.839377 systemd-logind[1987]: New session 4 of user core. Oct 12 23:59:10.848543 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 12 23:59:10.976410 sshd[2313]: Connection closed by 139.178.89.65 port 48460 Oct 12 23:59:10.977242 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Oct 12 23:59:10.985257 systemd[1]: sshd@3-172.31.31.230:22-139.178.89.65:48460.service: Deactivated successfully. Oct 12 23:59:10.989527 systemd[1]: session-4.scope: Deactivated successfully. Oct 12 23:59:10.991787 systemd-logind[1987]: Session 4 logged out. Waiting for processes to exit. Oct 12 23:59:10.994974 systemd-logind[1987]: Removed session 4. Oct 12 23:59:11.011889 systemd[1]: Started sshd@4-172.31.31.230:22-139.178.89.65:48466.service - OpenSSH per-connection server daemon (139.178.89.65:48466). Oct 12 23:59:11.210542 sshd[2319]: Accepted publickey for core from 139.178.89.65 port 48466 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:11.213106 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:11.223709 systemd-logind[1987]: New session 5 of user core. Oct 12 23:59:11.231560 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 12 23:59:11.350955 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 12 23:59:11.352510 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 12 23:59:11.367515 sudo[2323]: pam_unix(sudo:session): session closed for user root Oct 12 23:59:11.391807 sshd[2322]: Connection closed by 139.178.89.65 port 48466 Oct 12 23:59:11.393100 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Oct 12 23:59:11.400100 systemd[1]: sshd@4-172.31.31.230:22-139.178.89.65:48466.service: Deactivated successfully. Oct 12 23:59:11.403856 systemd[1]: session-5.scope: Deactivated successfully. Oct 12 23:59:11.409029 systemd-logind[1987]: Session 5 logged out. Waiting for processes to exit. Oct 12 23:59:11.411810 systemd-logind[1987]: Removed session 5. Oct 12 23:59:11.431226 systemd[1]: Started sshd@5-172.31.31.230:22-139.178.89.65:48478.service - OpenSSH per-connection server daemon (139.178.89.65:48478). 
Oct 12 23:59:11.648939 sshd[2329]: Accepted publickey for core from 139.178.89.65 port 48478 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:11.651601 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:11.661276 systemd-logind[1987]: New session 6 of user core. Oct 12 23:59:11.669534 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 12 23:59:11.778004 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 12 23:59:11.778745 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 12 23:59:11.788763 sudo[2334]: pam_unix(sudo:session): session closed for user root Oct 12 23:59:11.799784 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 12 23:59:11.801233 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 12 23:59:11.819716 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 12 23:59:11.898447 augenrules[2356]: No rules Oct 12 23:59:11.901595 systemd[1]: audit-rules.service: Deactivated successfully. Oct 12 23:59:11.902153 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 12 23:59:11.904614 sudo[2333]: pam_unix(sudo:session): session closed for user root Oct 12 23:59:11.928986 sshd[2332]: Connection closed by 139.178.89.65 port 48478 Oct 12 23:59:11.929894 sshd-session[2329]: pam_unix(sshd:session): session closed for user core Oct 12 23:59:11.938016 systemd[1]: sshd@5-172.31.31.230:22-139.178.89.65:48478.service: Deactivated successfully. Oct 12 23:59:11.941981 systemd[1]: session-6.scope: Deactivated successfully. Oct 12 23:59:11.944434 systemd-logind[1987]: Session 6 logged out. Waiting for processes to exit. Oct 12 23:59:11.948260 systemd-logind[1987]: Removed session 6. Oct 12 23:59:11.965135 systemd[1]: Started sshd@6-172.31.31.230:22-139.178.89.65:48482.service - OpenSSH per-connection server daemon (139.178.89.65:48482). Oct 12 23:59:12.161435 sshd[2365]: Accepted publickey for core from 139.178.89.65 port 48482 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 12 23:59:12.163769 sshd-session[2365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 12 23:59:12.173283 systemd-logind[1987]: New session 7 of user core. Oct 12 23:59:12.180549 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 12 23:59:12.284713 sudo[2369]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 12 23:59:12.285414 sudo[2369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 12 23:59:12.857680 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 12 23:59:12.888181 (dockerd)[2386]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 12 23:59:13.305221 dockerd[2386]: time="2025-10-12T23:59:13.305060393Z" level=info msg="Starting up" Oct 12 23:59:13.308719 dockerd[2386]: time="2025-10-12T23:59:13.308639285Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 12 23:59:13.329729 dockerd[2386]: time="2025-10-12T23:59:13.329660441Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 12 23:59:13.373817 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2437669122-merged.mount: Deactivated successfully. Oct 12 23:59:13.419000 dockerd[2386]: time="2025-10-12T23:59:13.418706705Z" level=info msg="Loading containers: start." Oct 12 23:59:13.433231 kernel: Initializing XFRM netlink socket Oct 12 23:59:13.157313 systemd-resolved[1899]: Clock change detected. Flushing caches. Oct 12 23:59:13.171594 systemd-journald[1523]: Time jumped backwards, rotating. Oct 12 23:59:13.370303 (udev-worker)[2408]: Network interface NamePolicy= disabled on kernel command line. Oct 12 23:59:13.458557 systemd-networkd[1898]: docker0: Link UP Oct 12 23:59:13.471942 dockerd[2386]: time="2025-10-12T23:59:13.471842505Z" level=info msg="Loading containers: done." Oct 12 23:59:13.502959 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4037136854-merged.mount: Deactivated successfully. Oct 12 23:59:13.505790 dockerd[2386]: time="2025-10-12T23:59:13.504928389Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 12 23:59:13.505790 dockerd[2386]: time="2025-10-12T23:59:13.505048809Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 12 23:59:13.505790 dockerd[2386]: time="2025-10-12T23:59:13.505208625Z" level=info msg="Initializing buildkit" Oct 12 23:59:13.564327 dockerd[2386]: time="2025-10-12T23:59:13.564263110Z" level=info msg="Completed buildkit initialization" Oct 12 23:59:13.583035 dockerd[2386]: time="2025-10-12T23:59:13.582953170Z" level=info msg="Daemon has completed initialization" Oct 12 23:59:13.583759 dockerd[2386]: time="2025-10-12T23:59:13.583513342Z" level=info msg="API listen on /run/docker.sock" Oct 12 23:59:13.584095 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 12 23:59:14.638493 containerd[2018]: time="2025-10-12T23:59:14.638416487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 12 23:59:15.302505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524054849.mount: Deactivated successfully. 
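The journal timestamps in the entries above step backwards (23:59:13.433231 for the kernel XFRM message, then 23:59:13.157313 for systemd-resolved's "Clock change detected"), which matches systemd-journald's "Time jumped backwards, rotating". A small sketch of that delta, assuming those two adjacent timestamps bracket the clock step:

```python
# Estimate the apparent backwards clock step from the two adjacent journal timestamps
# quoted in the log above (same date, so only the time-of-day part matters here).
from datetime import datetime

before = datetime.strptime("23:59:13.433231", "%H:%M:%S.%f")
after = datetime.strptime("23:59:13.157313", "%H:%M:%S.%f")

step = (before - after).total_seconds()
print(f"apparent backwards step: {step:.6f} s")   # ~0.276 s
```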
Oct 12 23:59:16.952813 containerd[2018]: time="2025-10-12T23:59:16.952018982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:16.954746 containerd[2018]: time="2025-10-12T23:59:16.954634394Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574510" Oct 12 23:59:16.957611 containerd[2018]: time="2025-10-12T23:59:16.957521354Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:16.965960 containerd[2018]: time="2025-10-12T23:59:16.965852570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:16.968443 containerd[2018]: time="2025-10-12T23:59:16.968162450Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 2.329675703s" Oct 12 23:59:16.968443 containerd[2018]: time="2025-10-12T23:59:16.968237270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 12 23:59:16.969354 containerd[2018]: time="2025-10-12T23:59:16.969291926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 12 23:59:18.618858 containerd[2018]: time="2025-10-12T23:59:18.618775971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:18.621441 containerd[2018]: time="2025-10-12T23:59:18.621367143Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132143" Oct 12 23:59:18.624045 containerd[2018]: time="2025-10-12T23:59:18.623959947Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:18.630598 containerd[2018]: time="2025-10-12T23:59:18.630501555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:18.632021 containerd[2018]: time="2025-10-12T23:59:18.631762179Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.662399297s" Oct 12 23:59:18.632021 containerd[2018]: time="2025-10-12T23:59:18.631823379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 12 23:59:18.632458 
containerd[2018]: time="2025-10-12T23:59:18.632398911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 12 23:59:19.253301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 12 23:59:19.256434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 12 23:59:19.627979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:19.644362 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 12 23:59:19.739533 kubelet[2672]: E1012 23:59:19.739412 2672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 12 23:59:19.750541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 12 23:59:19.752589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 12 23:59:19.753260 systemd[1]: kubelet.service: Consumed 321ms CPU time, 107M memory peak. Oct 12 23:59:20.103325 containerd[2018]: time="2025-10-12T23:59:20.103271498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:20.105480 containerd[2018]: time="2025-10-12T23:59:20.105435230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191884" Oct 12 23:59:20.106135 containerd[2018]: time="2025-10-12T23:59:20.106098542Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:20.110698 containerd[2018]: time="2025-10-12T23:59:20.110650514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:20.112705 containerd[2018]: time="2025-10-12T23:59:20.112661606Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 1.480203511s" Oct 12 23:59:20.112890 containerd[2018]: time="2025-10-12T23:59:20.112859750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 12 23:59:20.114233 containerd[2018]: time="2025-10-12T23:59:20.114185558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 12 23:59:21.326468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282043180.mount: Deactivated successfully. 
Oct 12 23:59:21.743223 containerd[2018]: time="2025-10-12T23:59:21.741993030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:21.743223 containerd[2018]: time="2025-10-12T23:59:21.743092410Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789028" Oct 12 23:59:21.744171 containerd[2018]: time="2025-10-12T23:59:21.744124002Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:21.748058 containerd[2018]: time="2025-10-12T23:59:21.747989790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:21.749326 containerd[2018]: time="2025-10-12T23:59:21.749283762Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.635044516s" Oct 12 23:59:21.749484 containerd[2018]: time="2025-10-12T23:59:21.749454270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 12 23:59:21.750637 containerd[2018]: time="2025-10-12T23:59:21.750559722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 12 23:59:22.273393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325073415.mount: Deactivated successfully. 
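From the numbers in the pull record above (22788047 bytes reported for registry.k8s.io/kube-proxy:v1.34.1 in 1.635044516s), the effective pull rate works out to roughly 14 MB/s; a quick sketch of that arithmetic using the logged values:

```python
# Effective image pull rate for registry.k8s.io/kube-proxy:v1.34.1, using the size and
# duration containerd reports in the log above.
size_bytes = 22_788_047
duration_s = 1.635044516

rate = size_bytes / duration_s
print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")   # ~13.9 MB/s (~13.3 MiB/s)
```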
Oct 12 23:59:23.600488 containerd[2018]: time="2025-10-12T23:59:23.600378763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:23.603698 containerd[2018]: time="2025-10-12T23:59:23.603168211Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Oct 12 23:59:23.606064 containerd[2018]: time="2025-10-12T23:59:23.605987803Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:23.612504 containerd[2018]: time="2025-10-12T23:59:23.612425203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:23.614837 containerd[2018]: time="2025-10-12T23:59:23.614772319Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.864139193s" Oct 12 23:59:23.615030 containerd[2018]: time="2025-10-12T23:59:23.614990887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 12 23:59:23.616579 containerd[2018]: time="2025-10-12T23:59:23.616521703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 12 23:59:24.154012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776646746.mount: Deactivated successfully. 
Oct 12 23:59:24.168289 containerd[2018]: time="2025-10-12T23:59:24.168194010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:24.170346 containerd[2018]: time="2025-10-12T23:59:24.170258646Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Oct 12 23:59:24.173183 containerd[2018]: time="2025-10-12T23:59:24.173082774Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:24.178326 containerd[2018]: time="2025-10-12T23:59:24.178208130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:24.180460 containerd[2018]: time="2025-10-12T23:59:24.180028794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 563.441751ms" Oct 12 23:59:24.180460 containerd[2018]: time="2025-10-12T23:59:24.180107958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 12 23:59:24.181129 containerd[2018]: time="2025-10-12T23:59:24.180854778Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 12 23:59:29.046034 containerd[2018]: time="2025-10-12T23:59:29.045960922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:29.048472 containerd[2018]: time="2025-10-12T23:59:29.048427486Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410766" Oct 12 23:59:29.050477 containerd[2018]: time="2025-10-12T23:59:29.050402710Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:29.058126 containerd[2018]: time="2025-10-12T23:59:29.058062514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:29.062227 containerd[2018]: time="2025-10-12T23:59:29.062120135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.881205645s" Oct 12 23:59:29.062227 containerd[2018]: time="2025-10-12T23:59:29.062198711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 12 23:59:29.753284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 12 23:59:29.757859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 12 23:59:30.097963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:30.108177 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 12 23:59:30.185269 kubelet[2814]: E1012 23:59:30.185208 2814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 12 23:59:30.190901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 12 23:59:30.191311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 12 23:59:30.192345 systemd[1]: kubelet.service: Consumed 284ms CPU time, 106.7M memory peak. Oct 12 23:59:36.289259 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 12 23:59:37.012582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:37.013217 systemd[1]: kubelet.service: Consumed 284ms CPU time, 106.7M memory peak. Oct 12 23:59:37.017803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 12 23:59:37.077078 systemd[1]: Reload requested from client PID 2832 ('systemctl') (unit session-7.scope)... Oct 12 23:59:37.077272 systemd[1]: Reloading... Oct 12 23:59:37.353758 zram_generator::config[2878]: No configuration found. Oct 12 23:59:37.861597 systemd[1]: Reloading finished in 783 ms. Oct 12 23:59:37.942445 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 12 23:59:37.942659 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 12 23:59:37.944826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:37.944941 systemd[1]: kubelet.service: Consumed 242ms CPU time, 95M memory peak. Oct 12 23:59:37.948305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 12 23:59:38.294914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:38.311304 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 12 23:59:38.393472 kubelet[2940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 12 23:59:38.393472 kubelet[2940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 12 23:59:38.395299 kubelet[2940]: I1012 23:59:38.395190 2940 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 12 23:59:39.851218 kubelet[2940]: I1012 23:59:39.851017 2940 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 12 23:59:39.851218 kubelet[2940]: I1012 23:59:39.851063 2940 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 12 23:59:39.857550 kubelet[2940]: I1012 23:59:39.857501 2940 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 12 23:59:39.857990 kubelet[2940]: I1012 23:59:39.857663 2940 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 12 23:59:39.858849 kubelet[2940]: I1012 23:59:39.858818 2940 server.go:956] "Client rotation is on, will bootstrap in background" Oct 12 23:59:39.881213 kubelet[2940]: E1012 23:59:39.881144 2940 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 12 23:59:39.884051 kubelet[2940]: I1012 23:59:39.884003 2940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 12 23:59:39.891885 kubelet[2940]: I1012 23:59:39.891841 2940 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 12 23:59:39.896951 kubelet[2940]: I1012 23:59:39.896911 2940 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 12 23:59:39.897384 kubelet[2940]: I1012 23:59:39.897332 2940 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 12 23:59:39.897681 kubelet[2940]: I1012 23:59:39.897385 2940 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 12 23:59:39.897681 kubelet[2940]: I1012 23:59:39.897670 2940 topology_manager.go:138] "Creating topology manager with none policy" Oct 12 23:59:39.897917 kubelet[2940]: I1012 23:59:39.897690 2940 container_manager_linux.go:306] "Creating device plugin manager" Oct 12 23:59:39.897917 kubelet[2940]: I1012 23:59:39.897883 2940 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 12 23:59:39.901986 kubelet[2940]: I1012 23:59:39.901946 2940 state_mem.go:36] "Initialized new in-memory state store" Oct 12 23:59:39.904381 kubelet[2940]: I1012 23:59:39.904335 2940 kubelet.go:475] "Attempting to sync node with API server" Oct 12 23:59:39.904381 kubelet[2940]: I1012 23:59:39.904376 2940 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 12 23:59:39.904529 kubelet[2940]: I1012 23:59:39.904424 2940 kubelet.go:387] "Adding apiserver pod source" Oct 12 23:59:39.904529 kubelet[2940]: I1012 23:59:39.904456 2940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 12 23:59:39.907748 kubelet[2940]: I1012 23:59:39.907437 2940 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 12 23:59:39.908567 kubelet[2940]: I1012 23:59:39.908525 2940 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 12 23:59:39.908652 kubelet[2940]: I1012 23:59:39.908585 2940 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 12 23:59:39.908731 
kubelet[2940]: W1012 23:59:39.908650 2940 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 12 23:59:39.913080 kubelet[2940]: I1012 23:59:39.912934 2940 server.go:1262] "Started kubelet" Oct 12 23:59:39.914744 kubelet[2940]: E1012 23:59:39.913260 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 12 23:59:39.917349 kubelet[2940]: E1012 23:59:39.917308 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-230&limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 12 23:59:39.917757 kubelet[2940]: I1012 23:59:39.917702 2940 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 12 23:59:39.919010 kubelet[2940]: I1012 23:59:39.918915 2940 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 12 23:59:39.919143 kubelet[2940]: I1012 23:59:39.919018 2940 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 12 23:59:39.919517 kubelet[2940]: I1012 23:59:39.919492 2940 server.go:310] "Adding debug handlers to kubelet server" Oct 12 23:59:39.928044 kubelet[2940]: I1012 23:59:39.928005 2940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 12 23:59:39.928276 kubelet[2940]: I1012 23:59:39.928228 2940 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 12 23:59:39.932015 kubelet[2940]: E1012 23:59:39.929422 2940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.230:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-230.186de3dfd80115d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-230,UID:ip-172-31-31-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-230,},FirstTimestamp:2025-10-12 23:59:39.912893904 +0000 UTC m=+1.595531037,LastTimestamp:2025-10-12 23:59:39.912893904 +0000 UTC m=+1.595531037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-230,}" Oct 12 23:59:39.932837 kubelet[2940]: I1012 23:59:39.932780 2940 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 12 23:59:39.942392 kubelet[2940]: I1012 23:59:39.942341 2940 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 12 23:59:39.942629 kubelet[2940]: I1012 23:59:39.942594 2940 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 12 23:59:39.942812 kubelet[2940]: I1012 23:59:39.942691 2940 reconciler.go:29] "Reconciler: start to sync state" Oct 12 23:59:39.946010 kubelet[2940]: I1012 23:59:39.945947 2940 factory.go:223] Registration 
of the systemd container factory successfully Oct 12 23:59:39.946179 kubelet[2940]: I1012 23:59:39.946131 2940 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 12 23:59:39.948509 kubelet[2940]: E1012 23:59:39.948420 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-230?timeout=10s\": dial tcp 172.31.31.230:6443: connect: connection refused" interval="200ms" Oct 12 23:59:39.948762 kubelet[2940]: E1012 23:59:39.948650 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 12 23:59:39.949801 kubelet[2940]: E1012 23:59:39.949499 2940 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-31-230\" not found" Oct 12 23:59:39.951560 kubelet[2940]: E1012 23:59:39.951507 2940 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 12 23:59:39.951828 kubelet[2940]: I1012 23:59:39.951791 2940 factory.go:223] Registration of the containerd container factory successfully Oct 12 23:59:39.973666 kubelet[2940]: I1012 23:59:39.972396 2940 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 12 23:59:39.974549 kubelet[2940]: I1012 23:59:39.974516 2940 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 12 23:59:39.974671 kubelet[2940]: I1012 23:59:39.974654 2940 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 12 23:59:39.974847 kubelet[2940]: I1012 23:59:39.974828 2940 kubelet.go:2427] "Starting kubelet main sync loop" Oct 12 23:59:39.975044 kubelet[2940]: E1012 23:59:39.975003 2940 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 12 23:59:39.984297 kubelet[2940]: E1012 23:59:39.984251 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 12 23:59:39.995165 kubelet[2940]: I1012 23:59:39.995122 2940 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 12 23:59:39.995772 kubelet[2940]: I1012 23:59:39.995697 2940 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 12 23:59:39.995923 kubelet[2940]: I1012 23:59:39.995905 2940 state_mem.go:36] "Initialized new in-memory state store" Oct 12 23:59:39.998264 kubelet[2940]: I1012 23:59:39.998199 2940 policy_none.go:49] "None policy: Start" Oct 12 23:59:39.998264 kubelet[2940]: I1012 23:59:39.998239 2940 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 12 23:59:39.998264 kubelet[2940]: I1012 23:59:39.998263 2940 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 12 23:59:39.999900 kubelet[2940]: I1012 23:59:39.999846 2940 policy_none.go:47] "Start" Oct 12 23:59:40.009453 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 12 23:59:40.024324 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 12 23:59:40.032770 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 12 23:59:40.048632 kubelet[2940]: E1012 23:59:40.048559 2940 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 12 23:59:40.048954 kubelet[2940]: I1012 23:59:40.048871 2940 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 12 23:59:40.048954 kubelet[2940]: I1012 23:59:40.048904 2940 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 12 23:59:40.051744 kubelet[2940]: E1012 23:59:40.051652 2940 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 12 23:59:40.051883 kubelet[2940]: E1012 23:59:40.051767 2940 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-230\" not found" Oct 12 23:59:40.053620 kubelet[2940]: I1012 23:59:40.053578 2940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 12 23:59:40.098650 systemd[1]: Created slice kubepods-burstable-pod6356dcfff86e160a1829c2e05d87fe19.slice - libcontainer container kubepods-burstable-pod6356dcfff86e160a1829c2e05d87fe19.slice. 
Oct 12 23:59:40.110574 kubelet[2940]: E1012 23:59:40.110439 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:40.116685 systemd[1]: Created slice kubepods-burstable-pod7a257eeb93888e0337bc3bb5230c185b.slice - libcontainer container kubepods-burstable-pod7a257eeb93888e0337bc3bb5230c185b.slice. Oct 12 23:59:40.122295 kubelet[2940]: E1012 23:59:40.121963 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:40.143372 systemd[1]: Created slice kubepods-burstable-pod04d59a950d0a50c824fdb8ab28b4899a.slice - libcontainer container kubepods-burstable-pod04d59a950d0a50c824fdb8ab28b4899a.slice. Oct 12 23:59:40.145401 kubelet[2940]: I1012 23:59:40.145320 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:40.145503 kubelet[2940]: I1012 23:59:40.145410 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:40.145503 kubelet[2940]: I1012 23:59:40.145479 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:40.145658 kubelet[2940]: I1012 23:59:40.145518 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:40.145658 kubelet[2940]: I1012 23:59:40.145612 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04d59a950d0a50c824fdb8ab28b4899a-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-230\" (UID: \"04d59a950d0a50c824fdb8ab28b4899a\") " pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:40.145811 kubelet[2940]: I1012 23:59:40.145700 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6356dcfff86e160a1829c2e05d87fe19-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-230\" (UID: \"6356dcfff86e160a1829c2e05d87fe19\") " pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:40.145811 kubelet[2940]: I1012 23:59:40.145780 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:40.145911 kubelet[2940]: I1012 23:59:40.145841 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6356dcfff86e160a1829c2e05d87fe19-ca-certs\") pod \"kube-apiserver-ip-172-31-31-230\" (UID: \"6356dcfff86e160a1829c2e05d87fe19\") " pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:40.145911 kubelet[2940]: I1012 23:59:40.145881 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6356dcfff86e160a1829c2e05d87fe19-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-230\" (UID: \"6356dcfff86e160a1829c2e05d87fe19\") " pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:40.148750 kubelet[2940]: E1012 23:59:40.148321 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:40.149453 kubelet[2940]: E1012 23:59:40.149402 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-230?timeout=10s\": dial tcp 172.31.31.230:6443: connect: connection refused" interval="400ms" Oct 12 23:59:40.151822 kubelet[2940]: I1012 23:59:40.151262 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-230" Oct 12 23:59:40.151968 kubelet[2940]: E1012 23:59:40.151931 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.230:6443/api/v1/nodes\": dial tcp 172.31.31.230:6443: connect: connection refused" node="ip-172-31-31-230" Oct 12 23:59:40.354838 kubelet[2940]: I1012 23:59:40.354805 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-230" Oct 12 23:59:40.355589 kubelet[2940]: E1012 23:59:40.355549 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.230:6443/api/v1/nodes\": dial tcp 172.31.31.230:6443: connect: connection refused" node="ip-172-31-31-230" Oct 12 23:59:40.416530 containerd[2018]: time="2025-10-12T23:59:40.416383391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-230,Uid:6356dcfff86e160a1829c2e05d87fe19,Namespace:kube-system,Attempt:0,}" Oct 12 23:59:40.425005 containerd[2018]: time="2025-10-12T23:59:40.424948031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-230,Uid:7a257eeb93888e0337bc3bb5230c185b,Namespace:kube-system,Attempt:0,}" Oct 12 23:59:40.452167 containerd[2018]: time="2025-10-12T23:59:40.451831883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-230,Uid:04d59a950d0a50c824fdb8ab28b4899a,Namespace:kube-system,Attempt:0,}" Oct 12 23:59:40.551005 kubelet[2940]: E1012 23:59:40.550957 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-230?timeout=10s\": dial tcp 172.31.31.230:6443: connect: connection refused" interval="800ms" Oct 12 23:59:40.758550 kubelet[2940]: I1012 
23:59:40.758503 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-230" Oct 12 23:59:40.759068 kubelet[2940]: E1012 23:59:40.759002 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.230:6443/api/v1/nodes\": dial tcp 172.31.31.230:6443: connect: connection refused" node="ip-172-31-31-230" Oct 12 23:59:40.886020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088073491.mount: Deactivated successfully. Oct 12 23:59:40.903736 containerd[2018]: time="2025-10-12T23:59:40.903648445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 12 23:59:40.912254 containerd[2018]: time="2025-10-12T23:59:40.912188809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Oct 12 23:59:40.914161 containerd[2018]: time="2025-10-12T23:59:40.914096221Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 12 23:59:40.917226 containerd[2018]: time="2025-10-12T23:59:40.916865137Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 12 23:59:40.920861 containerd[2018]: time="2025-10-12T23:59:40.920787601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 12 23:59:40.923007 containerd[2018]: time="2025-10-12T23:59:40.922943785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 12 23:59:40.925220 containerd[2018]: time="2025-10-12T23:59:40.925047589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 12 23:59:40.927535 containerd[2018]: time="2025-10-12T23:59:40.927474961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 12 23:59:40.928919 containerd[2018]: time="2025-10-12T23:59:40.928878049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 509.027402ms" Oct 12 23:59:40.932790 containerd[2018]: time="2025-10-12T23:59:40.932700157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 505.285994ms" Oct 12 23:59:40.946694 containerd[2018]: time="2025-10-12T23:59:40.946612202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 490.223919ms" Oct 12 23:59:40.979773 containerd[2018]: time="2025-10-12T23:59:40.979673450Z" level=info msg="connecting to shim bf8ba77918e92bc0a9a35aaa25c65c48a7ca2f56683ae4c6ebba89660b47898c" address="unix:///run/containerd/s/1bba0708b119f2d74334d6a3306fd35e10f7bd5c15539de0cee02ee223d39f32" namespace=k8s.io protocol=ttrpc version=3 Oct 12 23:59:41.016036 kubelet[2940]: E1012 23:59:41.015249 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 12 23:59:41.024777 containerd[2018]: time="2025-10-12T23:59:41.024282130Z" level=info msg="connecting to shim 2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758" address="unix:///run/containerd/s/f65d4ab65d1fa49a876270bbbe02b40847a64046de6fc1b5208b6c7781a37e67" namespace=k8s.io protocol=ttrpc version=3 Oct 12 23:59:41.037985 containerd[2018]: time="2025-10-12T23:59:41.037757782Z" level=info msg="connecting to shim c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75" address="unix:///run/containerd/s/85944eda903bbdb332b8e24efa31961ab743381fdebb515e700ca97415960202" namespace=k8s.io protocol=ttrpc version=3 Oct 12 23:59:41.042320 kubelet[2940]: E1012 23:59:41.042254 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 12 23:59:41.051089 systemd[1]: Started cri-containerd-bf8ba77918e92bc0a9a35aaa25c65c48a7ca2f56683ae4c6ebba89660b47898c.scope - libcontainer container bf8ba77918e92bc0a9a35aaa25c65c48a7ca2f56683ae4c6ebba89660b47898c. Oct 12 23:59:41.105339 systemd[1]: Started cri-containerd-2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758.scope - libcontainer container 2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758. Oct 12 23:59:41.132239 systemd[1]: Started cri-containerd-c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75.scope - libcontainer container c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75. 
Oct 12 23:59:41.194700 containerd[2018]: time="2025-10-12T23:59:41.194485223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-230,Uid:6356dcfff86e160a1829c2e05d87fe19,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf8ba77918e92bc0a9a35aaa25c65c48a7ca2f56683ae4c6ebba89660b47898c\"" Oct 12 23:59:41.212634 containerd[2018]: time="2025-10-12T23:59:41.212437115Z" level=info msg="CreateContainer within sandbox \"bf8ba77918e92bc0a9a35aaa25c65c48a7ca2f56683ae4c6ebba89660b47898c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 12 23:59:41.242459 containerd[2018]: time="2025-10-12T23:59:41.242390291Z" level=info msg="Container b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221: CDI devices from CRI Config.CDIDevices: []" Oct 12 23:59:41.277279 containerd[2018]: time="2025-10-12T23:59:41.276352091Z" level=info msg="CreateContainer within sandbox \"bf8ba77918e92bc0a9a35aaa25c65c48a7ca2f56683ae4c6ebba89660b47898c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221\"" Oct 12 23:59:41.278950 containerd[2018]: time="2025-10-12T23:59:41.278891303Z" level=info msg="StartContainer for \"b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221\"" Oct 12 23:59:41.280015 containerd[2018]: time="2025-10-12T23:59:41.279846503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-230,Uid:7a257eeb93888e0337bc3bb5230c185b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758\"" Oct 12 23:59:41.281753 containerd[2018]: time="2025-10-12T23:59:41.281595671Z" level=info msg="connecting to shim b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221" address="unix:///run/containerd/s/1bba0708b119f2d74334d6a3306fd35e10f7bd5c15539de0cee02ee223d39f32" protocol=ttrpc version=3 Oct 12 23:59:41.284276 containerd[2018]: time="2025-10-12T23:59:41.283527719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-230,Uid:04d59a950d0a50c824fdb8ab28b4899a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75\"" Oct 12 23:59:41.293207 containerd[2018]: time="2025-10-12T23:59:41.293158403Z" level=info msg="CreateContainer within sandbox \"2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 12 23:59:41.298534 containerd[2018]: time="2025-10-12T23:59:41.298426523Z" level=info msg="CreateContainer within sandbox \"c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 12 23:59:41.315184 kubelet[2940]: E1012 23:59:41.315124 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 12 23:59:41.321987 containerd[2018]: time="2025-10-12T23:59:41.321916595Z" level=info msg="Container 4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf: CDI devices from CRI Config.CDIDevices: []" Oct 12 23:59:41.329877 containerd[2018]: time="2025-10-12T23:59:41.329807951Z" level=info msg="Container 
a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1: CDI devices from CRI Config.CDIDevices: []" Oct 12 23:59:41.330286 systemd[1]: Started cri-containerd-b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221.scope - libcontainer container b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221. Oct 12 23:59:41.351286 containerd[2018]: time="2025-10-12T23:59:41.351215424Z" level=info msg="CreateContainer within sandbox \"2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\"" Oct 12 23:59:41.352266 kubelet[2940]: E1012 23:59:41.352013 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-230?timeout=10s\": dial tcp 172.31.31.230:6443: connect: connection refused" interval="1.6s" Oct 12 23:59:41.352586 containerd[2018]: time="2025-10-12T23:59:41.352520400Z" level=info msg="CreateContainer within sandbox \"c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\"" Oct 12 23:59:41.353411 containerd[2018]: time="2025-10-12T23:59:41.353357520Z" level=info msg="StartContainer for \"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\"" Oct 12 23:59:41.353601 containerd[2018]: time="2025-10-12T23:59:41.353379624Z" level=info msg="StartContainer for \"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\"" Oct 12 23:59:41.355501 containerd[2018]: time="2025-10-12T23:59:41.355426632Z" level=info msg="connecting to shim a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1" address="unix:///run/containerd/s/85944eda903bbdb332b8e24efa31961ab743381fdebb515e700ca97415960202" protocol=ttrpc version=3 Oct 12 23:59:41.356468 containerd[2018]: time="2025-10-12T23:59:41.356408232Z" level=info msg="connecting to shim 4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf" address="unix:///run/containerd/s/f65d4ab65d1fa49a876270bbbe02b40847a64046de6fc1b5208b6c7781a37e67" protocol=ttrpc version=3 Oct 12 23:59:41.407041 systemd[1]: Started cri-containerd-4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf.scope - libcontainer container 4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf. Oct 12 23:59:41.428022 systemd[1]: Started cri-containerd-a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1.scope - libcontainer container a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1. 
Oct 12 23:59:41.481570 kubelet[2940]: E1012 23:59:41.481489 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-230&limit=500&resourceVersion=0\": dial tcp 172.31.31.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 12 23:59:41.505226 containerd[2018]: time="2025-10-12T23:59:41.505161948Z" level=info msg="StartContainer for \"b635c5f52e92dd26a28a9802668d5c4760655a2176e7d935294722184e0a8221\" returns successfully" Oct 12 23:59:41.566197 kubelet[2940]: I1012 23:59:41.563436 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-230" Oct 12 23:59:41.566197 kubelet[2940]: E1012 23:59:41.563963 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.230:6443/api/v1/nodes\": dial tcp 172.31.31.230:6443: connect: connection refused" node="ip-172-31-31-230" Oct 12 23:59:41.583753 containerd[2018]: time="2025-10-12T23:59:41.583654105Z" level=info msg="StartContainer for \"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\" returns successfully" Oct 12 23:59:41.597695 containerd[2018]: time="2025-10-12T23:59:41.597614941Z" level=info msg="StartContainer for \"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\" returns successfully" Oct 12 23:59:42.009020 kubelet[2940]: E1012 23:59:42.008185 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:42.012360 kubelet[2940]: E1012 23:59:42.011575 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:42.021200 kubelet[2940]: E1012 23:59:42.021154 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:43.023963 kubelet[2940]: E1012 23:59:43.022875 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:43.025740 kubelet[2940]: E1012 23:59:43.022891 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:43.025740 kubelet[2940]: E1012 23:59:43.025100 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:43.169894 kubelet[2940]: I1012 23:59:43.169618 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-230" Oct 12 23:59:45.761398 kubelet[2940]: E1012 23:59:45.761343 2940 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-230\" not found" node="ip-172-31-31-230" Oct 12 23:59:45.887374 kubelet[2940]: I1012 23:59:45.887312 2940 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-230" Oct 12 23:59:45.910251 kubelet[2940]: I1012 23:59:45.910197 2940 apiserver.go:52] "Watching apiserver" Oct 12 23:59:45.921766 kubelet[2940]: E1012 23:59:45.920364 2940 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-230.186de3dfd80115d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-230,UID:ip-172-31-31-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-230,},FirstTimestamp:2025-10-12 23:59:39.912893904 +0000 UTC m=+1.595531037,LastTimestamp:2025-10-12 23:59:39.912893904 +0000 UTC m=+1.595531037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-230,}" Oct 12 23:59:45.943173 kubelet[2940]: I1012 23:59:45.943129 2940 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 12 23:59:45.944758 kubelet[2940]: I1012 23:59:45.944446 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:45.966623 kubelet[2940]: E1012 23:59:45.966582 2940 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:45.966902 kubelet[2940]: I1012 23:59:45.966834 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:45.973253 kubelet[2940]: E1012 23:59:45.973195 2940 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:45.973585 kubelet[2940]: I1012 23:59:45.973420 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:45.983121 kubelet[2940]: E1012 23:59:45.983057 2940 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:46.398466 kubelet[2940]: I1012 23:59:46.398385 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:47.086222 kubelet[2940]: I1012 23:59:47.085769 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:47.795206 systemd[1]: Reload requested from client PID 3226 ('systemctl') (unit session-7.scope)... Oct 12 23:59:47.795237 systemd[1]: Reloading... Oct 12 23:59:47.951774 zram_generator::config[3269]: No configuration found. Oct 12 23:59:48.552248 systemd[1]: Reloading finished in 756 ms. Oct 12 23:59:48.618066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 12 23:59:48.625402 systemd[1]: kubelet.service: Deactivated successfully. Oct 12 23:59:48.625985 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 12 23:59:48.626067 systemd[1]: kubelet.service: Consumed 2.354s CPU time, 121.1M memory peak. Oct 12 23:59:48.630652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 12 23:59:49.012815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 12 23:59:49.036438 (kubelet)[3330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 12 23:59:49.163040 kubelet[3330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 12 23:59:49.163040 kubelet[3330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 12 23:59:49.163040 kubelet[3330]: I1012 23:59:49.162794 3330 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 12 23:59:49.188245 kubelet[3330]: I1012 23:59:49.188016 3330 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 12 23:59:49.188245 kubelet[3330]: I1012 23:59:49.188066 3330 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 12 23:59:49.188245 kubelet[3330]: I1012 23:59:49.188123 3330 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 12 23:59:49.188245 kubelet[3330]: I1012 23:59:49.188137 3330 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 12 23:59:49.188580 kubelet[3330]: I1012 23:59:49.188539 3330 server.go:956] "Client rotation is on, will bootstrap in background" Oct 12 23:59:49.191062 kubelet[3330]: I1012 23:59:49.191008 3330 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 12 23:59:49.196636 kubelet[3330]: I1012 23:59:49.195947 3330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 12 23:59:49.206407 kubelet[3330]: I1012 23:59:49.206342 3330 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 12 23:59:49.216439 kubelet[3330]: I1012 23:59:49.216392 3330 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 12 23:59:49.217737 kubelet[3330]: I1012 23:59:49.217649 3330 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 12 23:59:49.218401 kubelet[3330]: I1012 23:59:49.217965 3330 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 12 23:59:49.218706 kubelet[3330]: I1012 23:59:49.218680 3330 topology_manager.go:138] "Creating topology manager with none policy" Oct 12 23:59:49.218912 kubelet[3330]: I1012 23:59:49.218862 3330 container_manager_linux.go:306] "Creating device plugin manager" Oct 12 23:59:49.219142 kubelet[3330]: I1012 23:59:49.219120 3330 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 12 23:59:49.221685 kubelet[3330]: I1012 23:59:49.221591 3330 state_mem.go:36] "Initialized new in-memory state store" Oct 12 23:59:49.222033 kubelet[3330]: I1012 23:59:49.222013 3330 kubelet.go:475] "Attempting to sync node with API server" Oct 12 23:59:49.222943 kubelet[3330]: I1012 23:59:49.222882 3330 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 12 23:59:49.223230 kubelet[3330]: I1012 23:59:49.223084 3330 kubelet.go:387] "Adding apiserver pod source" Oct 12 23:59:49.223440 kubelet[3330]: I1012 23:59:49.223335 3330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 12 23:59:49.229501 kubelet[3330]: I1012 23:59:49.229421 3330 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 12 23:59:49.233593 kubelet[3330]: I1012 23:59:49.233550 3330 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 12 23:59:49.235097 kubelet[3330]: I1012 23:59:49.234801 3330 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 12 23:59:49.253893 
kubelet[3330]: I1012 23:59:49.253860 3330 server.go:1262] "Started kubelet" Oct 12 23:59:49.269834 kubelet[3330]: I1012 23:59:49.267900 3330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 12 23:59:49.274635 kubelet[3330]: I1012 23:59:49.274457 3330 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 12 23:59:49.278064 kubelet[3330]: I1012 23:59:49.277991 3330 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 12 23:59:49.300395 kubelet[3330]: I1012 23:59:49.298873 3330 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 12 23:59:49.300395 kubelet[3330]: I1012 23:59:49.300117 3330 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 12 23:59:49.300395 kubelet[3330]: I1012 23:59:49.293981 3330 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 12 23:59:49.300395 kubelet[3330]: E1012 23:59:49.294201 3330 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-31-230\" not found" Oct 12 23:59:49.300395 kubelet[3330]: I1012 23:59:49.298779 3330 server.go:310] "Adding debug handlers to kubelet server" Oct 12 23:59:49.325485 kubelet[3330]: I1012 23:59:49.283679 3330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 12 23:59:49.325485 kubelet[3330]: I1012 23:59:49.293962 3330 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 12 23:59:49.331743 kubelet[3330]: I1012 23:59:49.330181 3330 reconciler.go:29] "Reconciler: start to sync state" Oct 12 23:59:49.335008 kubelet[3330]: I1012 23:59:49.334950 3330 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 12 23:59:49.354849 kubelet[3330]: I1012 23:59:49.354805 3330 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 12 23:59:49.355762 kubelet[3330]: E1012 23:59:49.355470 3330 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 12 23:59:49.357394 kubelet[3330]: I1012 23:59:49.357346 3330 factory.go:223] Registration of the containerd container factory successfully Oct 12 23:59:49.362391 kubelet[3330]: I1012 23:59:49.362356 3330 factory.go:223] Registration of the systemd container factory successfully Oct 12 23:59:49.400749 kubelet[3330]: E1012 23:59:49.400668 3330 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-31-230\" not found" Oct 12 23:59:49.434573 kubelet[3330]: I1012 23:59:49.433906 3330 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 12 23:59:49.434573 kubelet[3330]: I1012 23:59:49.434662 3330 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 12 23:59:49.435688 kubelet[3330]: I1012 23:59:49.435662 3330 kubelet.go:2427] "Starting kubelet main sync loop" Oct 12 23:59:49.437171 kubelet[3330]: E1012 23:59:49.436981 3330 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 12 23:59:49.537968 kubelet[3330]: E1012 23:59:49.537091 3330 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 12 23:59:49.544251 kubelet[3330]: I1012 23:59:49.544193 3330 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 12 23:59:49.545726 kubelet[3330]: I1012 23:59:49.545659 3330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.545973 3330 state_mem.go:36] "Initialized new in-memory state store" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546207 3330 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546243 3330 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546275 3330 policy_none.go:49] "None policy: Start" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546294 3330 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546315 3330 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546496 3330 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 12 23:59:49.546781 kubelet[3330]: I1012 23:59:49.546516 3330 policy_none.go:47] "Start" Oct 12 23:59:49.563876 kubelet[3330]: E1012 23:59:49.563838 3330 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 12 23:59:49.565504 kubelet[3330]: I1012 23:59:49.565473 3330 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 12 23:59:49.566565 kubelet[3330]: I1012 23:59:49.565998 3330 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 12 23:59:49.570843 kubelet[3330]: I1012 23:59:49.570739 3330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 12 23:59:49.574857 kubelet[3330]: E1012 23:59:49.574302 3330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 12 23:59:49.687264 kubelet[3330]: I1012 23:59:49.687186 3330 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-230" Oct 12 23:59:49.703820 kubelet[3330]: I1012 23:59:49.703361 3330 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-230" Oct 12 23:59:49.703820 kubelet[3330]: I1012 23:59:49.703472 3330 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-230" Oct 12 23:59:49.738896 kubelet[3330]: I1012 23:59:49.738859 3330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:49.740649 kubelet[3330]: I1012 23:59:49.739004 3330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:49.740890 kubelet[3330]: I1012 23:59:49.740846 3330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:49.753783 kubelet[3330]: E1012 23:59:49.752880 3330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-230\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:49.760383 kubelet[3330]: E1012 23:59:49.759700 3330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-230\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:49.835822 kubelet[3330]: I1012 23:59:49.835641 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:49.836428 kubelet[3330]: I1012 23:59:49.835705 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:49.836705 kubelet[3330]: I1012 23:59:49.836585 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:49.837424 kubelet[3330]: I1012 23:59:49.837223 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:49.838380 kubelet[3330]: I1012 23:59:49.838006 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a257eeb93888e0337bc3bb5230c185b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-230\" (UID: \"7a257eeb93888e0337bc3bb5230c185b\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-230" Oct 12 23:59:49.838688 kubelet[3330]: I1012 23:59:49.838639 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04d59a950d0a50c824fdb8ab28b4899a-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-230\" (UID: \"04d59a950d0a50c824fdb8ab28b4899a\") " pod="kube-system/kube-scheduler-ip-172-31-31-230" Oct 12 23:59:49.838979 kubelet[3330]: I1012 23:59:49.838926 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6356dcfff86e160a1829c2e05d87fe19-ca-certs\") pod \"kube-apiserver-ip-172-31-31-230\" (UID: \"6356dcfff86e160a1829c2e05d87fe19\") " pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:49.839136 kubelet[3330]: I1012 23:59:49.839101 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6356dcfff86e160a1829c2e05d87fe19-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-230\" (UID: \"6356dcfff86e160a1829c2e05d87fe19\") " pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:49.839267 kubelet[3330]: I1012 23:59:49.839235 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6356dcfff86e160a1829c2e05d87fe19-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-230\" (UID: \"6356dcfff86e160a1829c2e05d87fe19\") " pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:50.226447 kubelet[3330]: I1012 23:59:50.226030 3330 apiserver.go:52] "Watching apiserver" Oct 12 23:59:50.300665 kubelet[3330]: I1012 23:59:50.300608 3330 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 12 23:59:50.489866 kubelet[3330]: I1012 23:59:50.489468 3330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:50.515950 kubelet[3330]: E1012 23:59:50.515871 3330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-230\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-230" Oct 12 23:59:50.534563 update_engine[1988]: I20251012 23:59:50.533764 1988 update_attempter.cc:509] Updating boot flags... 
Oct 12 23:59:50.709922 kubelet[3330]: I1012 23:59:50.709079 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-230" podStartSLOduration=3.709058338 podStartE2EDuration="3.709058338s" podCreationTimestamp="2025-10-12 23:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-12 23:59:50.634805998 +0000 UTC m=+1.586096817" watchObservedRunningTime="2025-10-12 23:59:50.709058338 +0000 UTC m=+1.660349121" Oct 12 23:59:50.772763 kubelet[3330]: I1012 23:59:50.772660 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-230" podStartSLOduration=4.772638394 podStartE2EDuration="4.772638394s" podCreationTimestamp="2025-10-12 23:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-12 23:59:50.712948966 +0000 UTC m=+1.664239749" watchObservedRunningTime="2025-10-12 23:59:50.772638394 +0000 UTC m=+1.723929189" Oct 12 23:59:50.857458 kubelet[3330]: I1012 23:59:50.857385 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-230" podStartSLOduration=1.857362859 podStartE2EDuration="1.857362859s" podCreationTimestamp="2025-10-12 23:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-12 23:59:50.795435178 +0000 UTC m=+1.746726009" watchObservedRunningTime="2025-10-12 23:59:50.857362859 +0000 UTC m=+1.808653666" Oct 12 23:59:53.223825 kubelet[3330]: I1012 23:59:53.223577 3330 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 12 23:59:53.225107 containerd[2018]: time="2025-10-12T23:59:53.224973803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 12 23:59:53.226176 kubelet[3330]: I1012 23:59:53.226122 3330 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 12 23:59:54.218848 systemd[1]: Created slice kubepods-besteffort-podb149a9ca_dbf8_48d3_bcbd_2d8d09b41abc.slice - libcontainer container kubepods-besteffort-podb149a9ca_dbf8_48d3_bcbd_2d8d09b41abc.slice. 
Oct 12 23:59:54.272846 kubelet[3330]: I1012 23:59:54.272792 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxfv\" (UniqueName: \"kubernetes.io/projected/b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc-kube-api-access-slxfv\") pod \"kube-proxy-c5bfg\" (UID: \"b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc\") " pod="kube-system/kube-proxy-c5bfg" Oct 12 23:59:54.273639 kubelet[3330]: I1012 23:59:54.273593 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc-kube-proxy\") pod \"kube-proxy-c5bfg\" (UID: \"b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc\") " pod="kube-system/kube-proxy-c5bfg" Oct 12 23:59:54.273978 kubelet[3330]: I1012 23:59:54.273861 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc-xtables-lock\") pod \"kube-proxy-c5bfg\" (UID: \"b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc\") " pod="kube-system/kube-proxy-c5bfg" Oct 12 23:59:54.273978 kubelet[3330]: I1012 23:59:54.273931 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc-lib-modules\") pod \"kube-proxy-c5bfg\" (UID: \"b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc\") " pod="kube-system/kube-proxy-c5bfg" Oct 12 23:59:54.378279 kubelet[3330]: E1012 23:59:54.376218 3330 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-db78d5bd4-cms8q\" is forbidden: User \"system:node:ip-172-31-31-230\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-31-230' and this object" podUID="c33a75ad-7417-4481-80b7-72f12b1520d3" pod="tigera-operator/tigera-operator-db78d5bd4-cms8q" Oct 12 23:59:54.378279 kubelet[3330]: E1012 23:59:54.376967 3330 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ip-172-31-31-230\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-31-230' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap" Oct 12 23:59:54.378279 kubelet[3330]: E1012 23:59:54.377765 3330 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-31-230\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-31-230' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Oct 12 23:59:54.384272 systemd[1]: Created slice kubepods-besteffort-podc33a75ad_7417_4481_80b7_72f12b1520d3.slice - libcontainer container kubepods-besteffort-podc33a75ad_7417_4481_80b7_72f12b1520d3.slice. 
Oct 12 23:59:54.476111 kubelet[3330]: I1012 23:59:54.475421 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vcn\" (UniqueName: \"kubernetes.io/projected/c33a75ad-7417-4481-80b7-72f12b1520d3-kube-api-access-f4vcn\") pod \"tigera-operator-db78d5bd4-cms8q\" (UID: \"c33a75ad-7417-4481-80b7-72f12b1520d3\") " pod="tigera-operator/tigera-operator-db78d5bd4-cms8q" Oct 12 23:59:54.476111 kubelet[3330]: I1012 23:59:54.475500 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c33a75ad-7417-4481-80b7-72f12b1520d3-var-lib-calico\") pod \"tigera-operator-db78d5bd4-cms8q\" (UID: \"c33a75ad-7417-4481-80b7-72f12b1520d3\") " pod="tigera-operator/tigera-operator-db78d5bd4-cms8q" Oct 12 23:59:54.537698 containerd[2018]: time="2025-10-12T23:59:54.537603241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c5bfg,Uid:b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc,Namespace:kube-system,Attempt:0,}" Oct 12 23:59:54.583363 containerd[2018]: time="2025-10-12T23:59:54.583167601Z" level=info msg="connecting to shim 5671a81e54709157ffe261aed8a203c56bd7bb885f2ea7c057dcc031bd215d67" address="unix:///run/containerd/s/6a4abeb30556b43caf96c0037752a06f9bf3d809d8cf264fa8a3e331158541c6" namespace=k8s.io protocol=ttrpc version=3 Oct 12 23:59:54.636056 systemd[1]: Started cri-containerd-5671a81e54709157ffe261aed8a203c56bd7bb885f2ea7c057dcc031bd215d67.scope - libcontainer container 5671a81e54709157ffe261aed8a203c56bd7bb885f2ea7c057dcc031bd215d67. Oct 12 23:59:54.687036 containerd[2018]: time="2025-10-12T23:59:54.686977910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c5bfg,Uid:b149a9ca-dbf8-48d3-bcbd-2d8d09b41abc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5671a81e54709157ffe261aed8a203c56bd7bb885f2ea7c057dcc031bd215d67\"" Oct 12 23:59:54.696840 containerd[2018]: time="2025-10-12T23:59:54.696782906Z" level=info msg="CreateContainer within sandbox \"5671a81e54709157ffe261aed8a203c56bd7bb885f2ea7c057dcc031bd215d67\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 12 23:59:54.727618 containerd[2018]: time="2025-10-12T23:59:54.725911262Z" level=info msg="Container f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5: CDI devices from CRI Config.CDIDevices: []" Oct 12 23:59:54.726624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43676575.mount: Deactivated successfully. 
Oct 12 23:59:54.745013 containerd[2018]: time="2025-10-12T23:59:54.744934766Z" level=info msg="CreateContainer within sandbox \"5671a81e54709157ffe261aed8a203c56bd7bb885f2ea7c057dcc031bd215d67\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5\"" Oct 12 23:59:54.747794 containerd[2018]: time="2025-10-12T23:59:54.747514562Z" level=info msg="StartContainer for \"f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5\"" Oct 12 23:59:54.752351 containerd[2018]: time="2025-10-12T23:59:54.752275970Z" level=info msg="connecting to shim f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5" address="unix:///run/containerd/s/6a4abeb30556b43caf96c0037752a06f9bf3d809d8cf264fa8a3e331158541c6" protocol=ttrpc version=3 Oct 12 23:59:54.790042 systemd[1]: Started cri-containerd-f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5.scope - libcontainer container f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5. Oct 12 23:59:54.876423 containerd[2018]: time="2025-10-12T23:59:54.876255999Z" level=info msg="StartContainer for \"f81b744c8a4a86ed2b53969124fc4d92c783511e5b35dea2e9115d0c283f27f5\" returns successfully" Oct 12 23:59:55.596237 containerd[2018]: time="2025-10-12T23:59:55.596175794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-cms8q,Uid:c33a75ad-7417-4481-80b7-72f12b1520d3,Namespace:tigera-operator,Attempt:0,}" Oct 12 23:59:55.654675 containerd[2018]: time="2025-10-12T23:59:55.654607491Z" level=info msg="connecting to shim 5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754" address="unix:///run/containerd/s/377494bad29d4119778f6961854408a7a39173a7d3c1a22c92956992a07f7c7f" namespace=k8s.io protocol=ttrpc version=3 Oct 12 23:59:55.714387 systemd[1]: Started cri-containerd-5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754.scope - libcontainer container 5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754. Oct 12 23:59:55.809749 containerd[2018]: time="2025-10-12T23:59:55.809509323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-cms8q,Uid:c33a75ad-7417-4481-80b7-72f12b1520d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754\"" Oct 12 23:59:55.820527 containerd[2018]: time="2025-10-12T23:59:55.820386375Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 12 23:59:56.022697 kubelet[3330]: I1012 23:59:56.022594 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c5bfg" podStartSLOduration=2.022571064 podStartE2EDuration="2.022571064s" podCreationTimestamp="2025-10-12 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-12 23:59:55.524265794 +0000 UTC m=+6.475556613" watchObservedRunningTime="2025-10-12 23:59:56.022571064 +0000 UTC m=+6.973861859" Oct 12 23:59:57.132231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730988705.mount: Deactivated successfully. 
Oct 12 23:59:57.867307 containerd[2018]: time="2025-10-12T23:59:57.867251742Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:57.868849 containerd[2018]: time="2025-10-12T23:59:57.868776486Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Oct 12 23:59:57.869634 containerd[2018]: time="2025-10-12T23:59:57.869538726Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:57.874505 containerd[2018]: time="2025-10-12T23:59:57.874419546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 12 23:59:57.876047 containerd[2018]: time="2025-10-12T23:59:57.875873034Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.055421319s" Oct 12 23:59:57.876047 containerd[2018]: time="2025-10-12T23:59:57.875922570Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Oct 12 23:59:57.884240 containerd[2018]: time="2025-10-12T23:59:57.883688526Z" level=info msg="CreateContainer within sandbox \"5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 12 23:59:57.903758 containerd[2018]: time="2025-10-12T23:59:57.900694110Z" level=info msg="Container ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2: CDI devices from CRI Config.CDIDevices: []" Oct 12 23:59:57.905738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205230800.mount: Deactivated successfully. Oct 12 23:59:57.921171 containerd[2018]: time="2025-10-12T23:59:57.921098322Z" level=info msg="CreateContainer within sandbox \"5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\"" Oct 12 23:59:57.922951 containerd[2018]: time="2025-10-12T23:59:57.922884870Z" level=info msg="StartContainer for \"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\"" Oct 12 23:59:57.925232 containerd[2018]: time="2025-10-12T23:59:57.925181166Z" level=info msg="connecting to shim ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2" address="unix:///run/containerd/s/377494bad29d4119778f6961854408a7a39173a7d3c1a22c92956992a07f7c7f" protocol=ttrpc version=3 Oct 12 23:59:57.984056 systemd[1]: Started cri-containerd-ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2.scope - libcontainer container ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2. 
Oct 12 23:59:58.045482 containerd[2018]: time="2025-10-12T23:59:58.045436862Z" level=info msg="StartContainer for \"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\" returns successfully" Oct 12 23:59:58.878202 kubelet[3330]: I1012 23:59:58.878075 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-cms8q" podStartSLOduration=2.816881824 podStartE2EDuration="4.878053219s" podCreationTimestamp="2025-10-12 23:59:54 +0000 UTC" firstStartedPulling="2025-10-12 23:59:55.816629523 +0000 UTC m=+6.767920306" lastFinishedPulling="2025-10-12 23:59:57.877800918 +0000 UTC m=+8.829091701" observedRunningTime="2025-10-12 23:59:58.542470277 +0000 UTC m=+9.493761096" watchObservedRunningTime="2025-10-12 23:59:58.878053219 +0000 UTC m=+9.829344002" Oct 13 00:00:06.896460 sudo[2369]: pam_unix(sudo:session): session closed for user root Oct 13 00:00:06.919853 sshd[2368]: Connection closed by 139.178.89.65 port 48482 Oct 13 00:00:06.922020 sshd-session[2365]: pam_unix(sshd:session): session closed for user core Oct 13 00:00:06.934330 systemd[1]: sshd@6-172.31.31.230:22-139.178.89.65:48482.service: Deactivated successfully. Oct 13 00:00:06.942560 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 00:00:06.944586 systemd[1]: session-7.scope: Consumed 11.753s CPU time, 225M memory peak. Oct 13 00:00:06.954097 systemd-logind[1987]: Session 7 logged out. Waiting for processes to exit. Oct 13 00:00:06.962538 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Oct 13 00:00:06.964778 systemd-logind[1987]: Removed session 7. Oct 13 00:00:06.992223 systemd[1]: mdadm.service: Deactivated successfully. Oct 13 00:00:06.994485 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. Oct 13 00:00:21.435355 systemd[1]: Created slice kubepods-besteffort-pod7da93a00_1ee4_4a05_8786_90430bbc2f31.slice - libcontainer container kubepods-besteffort-pod7da93a00_1ee4_4a05_8786_90430bbc2f31.slice. 
Oct 13 00:00:21.462055 kubelet[3330]: I1013 00:00:21.461900 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7da93a00-1ee4-4a05-8786-90430bbc2f31-tigera-ca-bundle\") pod \"calico-typha-6f6b69f877-9kk72\" (UID: \"7da93a00-1ee4-4a05-8786-90430bbc2f31\") " pod="calico-system/calico-typha-6f6b69f877-9kk72" Oct 13 00:00:21.462055 kubelet[3330]: I1013 00:00:21.461968 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7da93a00-1ee4-4a05-8786-90430bbc2f31-typha-certs\") pod \"calico-typha-6f6b69f877-9kk72\" (UID: \"7da93a00-1ee4-4a05-8786-90430bbc2f31\") " pod="calico-system/calico-typha-6f6b69f877-9kk72" Oct 13 00:00:21.462055 kubelet[3330]: I1013 00:00:21.462049 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g428j\" (UniqueName: \"kubernetes.io/projected/7da93a00-1ee4-4a05-8786-90430bbc2f31-kube-api-access-g428j\") pod \"calico-typha-6f6b69f877-9kk72\" (UID: \"7da93a00-1ee4-4a05-8786-90430bbc2f31\") " pod="calico-system/calico-typha-6f6b69f877-9kk72" Oct 13 00:00:21.751260 containerd[2018]: time="2025-10-13T00:00:21.751043644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f6b69f877-9kk72,Uid:7da93a00-1ee4-4a05-8786-90430bbc2f31,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:21.817207 containerd[2018]: time="2025-10-13T00:00:21.817072049Z" level=info msg="connecting to shim a8070ef03e4fdd6ce4d11c857c42977818f89cc2750092e6f9c57c185d349145" address="unix:///run/containerd/s/a1c3942cb42611427cbab22a58e1f8bb560d8b59e344ce74a912f57e8c003a79" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:00:21.885050 systemd[1]: Started cri-containerd-a8070ef03e4fdd6ce4d11c857c42977818f89cc2750092e6f9c57c185d349145.scope - libcontainer container a8070ef03e4fdd6ce4d11c857c42977818f89cc2750092e6f9c57c185d349145. Oct 13 00:00:21.927197 systemd[1]: Created slice kubepods-besteffort-pode9ef2b83_52b7_4632_b5cf_472e12ba804b.slice - libcontainer container kubepods-besteffort-pode9ef2b83_52b7_4632_b5cf_472e12ba804b.slice. 
Oct 13 00:00:21.966547 kubelet[3330]: I1013 00:00:21.966488 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-cni-log-dir\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.966936 kubelet[3330]: I1013 00:00:21.966819 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-lib-modules\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.966936 kubelet[3330]: I1013 00:00:21.966888 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9ef2b83-52b7-4632-b5cf-472e12ba804b-node-certs\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967085 kubelet[3330]: I1013 00:00:21.966939 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9ef2b83-52b7-4632-b5cf-472e12ba804b-tigera-ca-bundle\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967085 kubelet[3330]: I1013 00:00:21.967000 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-var-run-calico\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967085 kubelet[3330]: I1013 00:00:21.967063 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-cni-net-dir\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967258 kubelet[3330]: I1013 00:00:21.967103 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-flexvol-driver-host\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967258 kubelet[3330]: I1013 00:00:21.967141 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9l7\" (UniqueName: \"kubernetes.io/projected/e9ef2b83-52b7-4632-b5cf-472e12ba804b-kube-api-access-5z9l7\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967258 kubelet[3330]: I1013 00:00:21.967181 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-policysync\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967258 kubelet[3330]: I1013 00:00:21.967218 3330 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-var-lib-calico\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967258 kubelet[3330]: I1013 00:00:21.967251 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-xtables-lock\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:21.967500 kubelet[3330]: I1013 00:00:21.967286 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9ef2b83-52b7-4632-b5cf-472e12ba804b-cni-bin-dir\") pod \"calico-node-r8z86\" (UID: \"e9ef2b83-52b7-4632-b5cf-472e12ba804b\") " pod="calico-system/calico-node-r8z86" Oct 13 00:00:22.090386 kubelet[3330]: E1013 00:00:22.090161 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.090386 kubelet[3330]: W1013 00:00:22.090201 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.090386 kubelet[3330]: E1013 00:00:22.090235 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.115079 kubelet[3330]: E1013 00:00:22.115020 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.118086 kubelet[3330]: W1013 00:00:22.115203 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.118086 kubelet[3330]: E1013 00:00:22.115247 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.119549 kubelet[3330]: E1013 00:00:22.119482 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:22.129632 kubelet[3330]: E1013 00:00:22.129553 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.129632 kubelet[3330]: W1013 00:00:22.129589 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.129975 kubelet[3330]: E1013 00:00:22.129794 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.132933 kubelet[3330]: E1013 00:00:22.132069 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.132933 kubelet[3330]: W1013 00:00:22.132128 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.132933 kubelet[3330]: E1013 00:00:22.132866 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.134126 kubelet[3330]: E1013 00:00:22.133803 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.134261 containerd[2018]: time="2025-10-13T00:00:22.131707646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f6b69f877-9kk72,Uid:7da93a00-1ee4-4a05-8786-90430bbc2f31,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8070ef03e4fdd6ce4d11c857c42977818f89cc2750092e6f9c57c185d349145\"" Oct 13 00:00:22.134963 kubelet[3330]: W1013 00:00:22.134757 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.134963 kubelet[3330]: E1013 00:00:22.134920 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.137496 kubelet[3330]: E1013 00:00:22.137361 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.137496 kubelet[3330]: W1013 00:00:22.137415 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.137883 kubelet[3330]: E1013 00:00:22.137446 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.140267 kubelet[3330]: E1013 00:00:22.140233 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.142589 kubelet[3330]: W1013 00:00:22.140368 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.143016 kubelet[3330]: E1013 00:00:22.142809 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.143262 kubelet[3330]: E1013 00:00:22.143216 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.143477 kubelet[3330]: W1013 00:00:22.143451 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.143911 kubelet[3330]: E1013 00:00:22.143699 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.145204 containerd[2018]: time="2025-10-13T00:00:22.145156478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 00:00:22.147015 kubelet[3330]: E1013 00:00:22.146913 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.147576 kubelet[3330]: W1013 00:00:22.147394 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.147576 kubelet[3330]: E1013 00:00:22.147497 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.148257 kubelet[3330]: E1013 00:00:22.148226 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.148594 kubelet[3330]: W1013 00:00:22.148457 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.148745 kubelet[3330]: E1013 00:00:22.148497 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.149529 kubelet[3330]: E1013 00:00:22.149299 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.149529 kubelet[3330]: W1013 00:00:22.149327 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.149529 kubelet[3330]: E1013 00:00:22.149355 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.153659 kubelet[3330]: E1013 00:00:22.152799 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.153659 kubelet[3330]: W1013 00:00:22.152857 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.153659 kubelet[3330]: E1013 00:00:22.152889 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.154388 kubelet[3330]: E1013 00:00:22.154357 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.154945 kubelet[3330]: W1013 00:00:22.154802 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.155383 kubelet[3330]: E1013 00:00:22.155095 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.156251 kubelet[3330]: E1013 00:00:22.156200 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.156475 kubelet[3330]: W1013 00:00:22.156324 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.156745 kubelet[3330]: E1013 00:00:22.156581 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.158203 kubelet[3330]: E1013 00:00:22.157740 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.158203 kubelet[3330]: W1013 00:00:22.157775 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.158203 kubelet[3330]: E1013 00:00:22.157806 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.158941 kubelet[3330]: E1013 00:00:22.158897 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.159161 kubelet[3330]: W1013 00:00:22.159081 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.159161 kubelet[3330]: E1013 00:00:22.159120 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.160341 kubelet[3330]: E1013 00:00:22.160191 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.161096 kubelet[3330]: W1013 00:00:22.160222 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.161096 kubelet[3330]: E1013 00:00:22.160786 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.162399 kubelet[3330]: E1013 00:00:22.162356 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.163219 kubelet[3330]: W1013 00:00:22.162770 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.163219 kubelet[3330]: E1013 00:00:22.162814 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.165746 kubelet[3330]: E1013 00:00:22.165506 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.166383 kubelet[3330]: W1013 00:00:22.165915 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.166383 kubelet[3330]: E1013 00:00:22.165959 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.168455 kubelet[3330]: E1013 00:00:22.168416 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.169004 kubelet[3330]: W1013 00:00:22.168626 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.169004 kubelet[3330]: E1013 00:00:22.168668 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.169693 kubelet[3330]: E1013 00:00:22.169526 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.170409 kubelet[3330]: W1013 00:00:22.169619 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.170409 kubelet[3330]: E1013 00:00:22.170032 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.171863 kubelet[3330]: E1013 00:00:22.171195 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.172407 kubelet[3330]: W1013 00:00:22.172061 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.172407 kubelet[3330]: E1013 00:00:22.172114 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.174564 kubelet[3330]: E1013 00:00:22.174527 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.175037 kubelet[3330]: W1013 00:00:22.174799 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.175037 kubelet[3330]: E1013 00:00:22.174839 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.175037 kubelet[3330]: I1013 00:00:22.174896 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e00ccdfa-8874-41a5-8202-ed22b850ea32-kubelet-dir\") pod \"csi-node-driver-h6q2t\" (UID: \"e00ccdfa-8874-41a5-8202-ed22b850ea32\") " pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:22.175761 kubelet[3330]: E1013 00:00:22.175637 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.175761 kubelet[3330]: W1013 00:00:22.175670 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.176024 kubelet[3330]: E1013 00:00:22.175700 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.176024 kubelet[3330]: I1013 00:00:22.175961 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e00ccdfa-8874-41a5-8202-ed22b850ea32-registration-dir\") pod \"csi-node-driver-h6q2t\" (UID: \"e00ccdfa-8874-41a5-8202-ed22b850ea32\") " pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:22.176988 kubelet[3330]: E1013 00:00:22.176940 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.176988 kubelet[3330]: W1013 00:00:22.176978 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.178173 kubelet[3330]: E1013 00:00:22.177016 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.178838 kubelet[3330]: E1013 00:00:22.178784 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.178937 kubelet[3330]: W1013 00:00:22.178855 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.178937 kubelet[3330]: E1013 00:00:22.178892 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.180020 kubelet[3330]: E1013 00:00:22.179873 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.180020 kubelet[3330]: W1013 00:00:22.179911 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.180020 kubelet[3330]: E1013 00:00:22.179970 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.180946 kubelet[3330]: I1013 00:00:22.180835 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrdt\" (UniqueName: \"kubernetes.io/projected/e00ccdfa-8874-41a5-8202-ed22b850ea32-kube-api-access-mtrdt\") pod \"csi-node-driver-h6q2t\" (UID: \"e00ccdfa-8874-41a5-8202-ed22b850ea32\") " pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:22.181553 kubelet[3330]: E1013 00:00:22.181355 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.181553 kubelet[3330]: W1013 00:00:22.181546 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.181921 kubelet[3330]: E1013 00:00:22.181634 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.183230 kubelet[3330]: E1013 00:00:22.183080 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.183230 kubelet[3330]: W1013 00:00:22.183141 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.183230 kubelet[3330]: E1013 00:00:22.183174 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.185754 kubelet[3330]: E1013 00:00:22.184243 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.185754 kubelet[3330]: W1013 00:00:22.184768 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.185754 kubelet[3330]: E1013 00:00:22.184841 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.185754 kubelet[3330]: I1013 00:00:22.184926 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e00ccdfa-8874-41a5-8202-ed22b850ea32-socket-dir\") pod \"csi-node-driver-h6q2t\" (UID: \"e00ccdfa-8874-41a5-8202-ed22b850ea32\") " pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:22.187185 kubelet[3330]: E1013 00:00:22.186805 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.187185 kubelet[3330]: W1013 00:00:22.186893 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.187185 kubelet[3330]: E1013 00:00:22.186955 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.187185 kubelet[3330]: I1013 00:00:22.187043 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e00ccdfa-8874-41a5-8202-ed22b850ea32-varrun\") pod \"csi-node-driver-h6q2t\" (UID: \"e00ccdfa-8874-41a5-8202-ed22b850ea32\") " pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:22.187813 kubelet[3330]: E1013 00:00:22.187439 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.187813 kubelet[3330]: W1013 00:00:22.187461 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.187813 kubelet[3330]: E1013 00:00:22.187542 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.188317 kubelet[3330]: E1013 00:00:22.188157 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.188317 kubelet[3330]: W1013 00:00:22.188191 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.188317 kubelet[3330]: E1013 00:00:22.188221 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.190129 kubelet[3330]: E1013 00:00:22.190077 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.190129 kubelet[3330]: W1013 00:00:22.190116 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.190767 kubelet[3330]: E1013 00:00:22.190150 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.190767 kubelet[3330]: E1013 00:00:22.190544 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.190767 kubelet[3330]: W1013 00:00:22.190564 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.190767 kubelet[3330]: E1013 00:00:22.190588 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.191387 kubelet[3330]: E1013 00:00:22.191338 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.191387 kubelet[3330]: W1013 00:00:22.191375 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.192695 kubelet[3330]: E1013 00:00:22.191405 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.192695 kubelet[3330]: E1013 00:00:22.192471 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.192695 kubelet[3330]: W1013 00:00:22.192495 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.192695 kubelet[3330]: E1013 00:00:22.192523 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.240353 containerd[2018]: time="2025-10-13T00:00:22.240301371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r8z86,Uid:e9ef2b83-52b7-4632-b5cf-472e12ba804b,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:22.289130 kubelet[3330]: E1013 00:00:22.289076 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.289130 kubelet[3330]: W1013 00:00:22.289118 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.289365 kubelet[3330]: E1013 00:00:22.289153 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.290751 kubelet[3330]: E1013 00:00:22.289906 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.290751 kubelet[3330]: W1013 00:00:22.289965 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.290751 kubelet[3330]: E1013 00:00:22.289993 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.291229 kubelet[3330]: E1013 00:00:22.291058 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.291229 kubelet[3330]: W1013 00:00:22.291129 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.291229 kubelet[3330]: E1013 00:00:22.291160 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.293508 kubelet[3330]: E1013 00:00:22.293442 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.293508 kubelet[3330]: W1013 00:00:22.293496 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.293508 kubelet[3330]: E1013 00:00:22.293531 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.294414 kubelet[3330]: E1013 00:00:22.294374 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.294414 kubelet[3330]: W1013 00:00:22.294408 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.294528 kubelet[3330]: E1013 00:00:22.294439 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.295813 kubelet[3330]: E1013 00:00:22.295092 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.295813 kubelet[3330]: W1013 00:00:22.295130 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.295813 kubelet[3330]: E1013 00:00:22.295157 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.296934 kubelet[3330]: E1013 00:00:22.296891 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.297046 kubelet[3330]: W1013 00:00:22.296940 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.297046 kubelet[3330]: E1013 00:00:22.296975 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.297671 kubelet[3330]: E1013 00:00:22.297630 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.297671 kubelet[3330]: W1013 00:00:22.297662 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.297890 kubelet[3330]: E1013 00:00:22.297690 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.299040 kubelet[3330]: E1013 00:00:22.298184 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.299040 kubelet[3330]: W1013 00:00:22.298205 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.299040 kubelet[3330]: E1013 00:00:22.298230 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.299940 kubelet[3330]: E1013 00:00:22.299890 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.299940 kubelet[3330]: W1013 00:00:22.299933 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.300123 kubelet[3330]: E1013 00:00:22.299967 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.300657 kubelet[3330]: E1013 00:00:22.300596 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.300657 kubelet[3330]: W1013 00:00:22.300630 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.300657 kubelet[3330]: E1013 00:00:22.300660 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.301952 kubelet[3330]: E1013 00:00:22.301428 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.301952 kubelet[3330]: W1013 00:00:22.301496 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.301952 kubelet[3330]: E1013 00:00:22.301531 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.302337 kubelet[3330]: E1013 00:00:22.302298 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.302337 kubelet[3330]: W1013 00:00:22.302330 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.302464 kubelet[3330]: E1013 00:00:22.302359 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.305087 kubelet[3330]: E1013 00:00:22.303964 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.305087 kubelet[3330]: W1013 00:00:22.304002 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.305087 kubelet[3330]: E1013 00:00:22.304037 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.305087 kubelet[3330]: E1013 00:00:22.304693 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.305087 kubelet[3330]: W1013 00:00:22.304915 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.305087 kubelet[3330]: E1013 00:00:22.304996 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.305748 kubelet[3330]: E1013 00:00:22.305694 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.306053 kubelet[3330]: W1013 00:00:22.305874 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.306053 kubelet[3330]: E1013 00:00:22.305918 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.307098 containerd[2018]: time="2025-10-13T00:00:22.306411279Z" level=info msg="connecting to shim 0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99" address="unix:///run/containerd/s/86e29293f329c8eca4a675d32d180b0661399bec45f293b310a8e93cea9fe49d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:00:22.308909 kubelet[3330]: E1013 00:00:22.308405 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.308909 kubelet[3330]: W1013 00:00:22.308444 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.308909 kubelet[3330]: E1013 00:00:22.308476 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.309135 kubelet[3330]: E1013 00:00:22.309110 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.309185 kubelet[3330]: W1013 00:00:22.309132 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.309185 kubelet[3330]: E1013 00:00:22.309158 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.310317 kubelet[3330]: E1013 00:00:22.310207 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.310317 kubelet[3330]: W1013 00:00:22.310244 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.310317 kubelet[3330]: E1013 00:00:22.310275 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.311147 kubelet[3330]: E1013 00:00:22.311110 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.311147 kubelet[3330]: W1013 00:00:22.311133 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.311240 kubelet[3330]: E1013 00:00:22.311160 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.313401 kubelet[3330]: E1013 00:00:22.311876 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.313401 kubelet[3330]: W1013 00:00:22.311915 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.313401 kubelet[3330]: E1013 00:00:22.311946 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.314778 kubelet[3330]: E1013 00:00:22.314609 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.314778 kubelet[3330]: W1013 00:00:22.314646 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.314778 kubelet[3330]: E1013 00:00:22.314680 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.318801 kubelet[3330]: E1013 00:00:22.315574 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.318801 kubelet[3330]: W1013 00:00:22.316440 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.318801 kubelet[3330]: E1013 00:00:22.316474 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.318801 kubelet[3330]: E1013 00:00:22.317523 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.318801 kubelet[3330]: W1013 00:00:22.317551 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.318801 kubelet[3330]: E1013 00:00:22.317582 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.318801 kubelet[3330]: E1013 00:00:22.318183 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.318801 kubelet[3330]: W1013 00:00:22.318207 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.318801 kubelet[3330]: E1013 00:00:22.318233 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:22.369180 kubelet[3330]: E1013 00:00:22.369020 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:22.369396 kubelet[3330]: W1013 00:00:22.369325 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:22.369891 kubelet[3330]: E1013 00:00:22.369371 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:22.404457 systemd[1]: Started cri-containerd-0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99.scope - libcontainer container 0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99. Oct 13 00:00:22.524361 containerd[2018]: time="2025-10-13T00:00:22.524277952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r8z86,Uid:e9ef2b83-52b7-4632-b5cf-472e12ba804b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\"" Oct 13 00:00:23.437595 kubelet[3330]: E1013 00:00:23.437472 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:25.437752 kubelet[3330]: E1013 00:00:25.437094 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:27.436751 kubelet[3330]: E1013 00:00:27.436373 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:29.078967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599498838.mount: Deactivated successfully. 
Oct 13 00:00:29.438122 kubelet[3330]: E1013 00:00:29.437991 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:31.437618 kubelet[3330]: E1013 00:00:31.436953 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:32.765662 containerd[2018]: time="2025-10-13T00:00:32.765578079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:32.768520 containerd[2018]: time="2025-10-13T00:00:32.768448119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Oct 13 00:00:32.771148 containerd[2018]: time="2025-10-13T00:00:32.771073491Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:32.775587 containerd[2018]: time="2025-10-13T00:00:32.775509291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:32.778973 containerd[2018]: time="2025-10-13T00:00:32.778091439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 10.632609101s" Oct 13 00:00:32.778973 containerd[2018]: time="2025-10-13T00:00:32.778154667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Oct 13 00:00:32.781416 containerd[2018]: time="2025-10-13T00:00:32.781370751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 00:00:32.811998 containerd[2018]: time="2025-10-13T00:00:32.811941627Z" level=info msg="CreateContainer within sandbox \"a8070ef03e4fdd6ce4d11c857c42977818f89cc2750092e6f9c57c185d349145\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 00:00:32.841927 containerd[2018]: time="2025-10-13T00:00:32.839939523Z" level=info msg="Container 583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:00:32.856907 containerd[2018]: time="2025-10-13T00:00:32.856830903Z" level=info msg="CreateContainer within sandbox \"a8070ef03e4fdd6ce4d11c857c42977818f89cc2750092e6f9c57c185d349145\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28\"" Oct 13 00:00:32.857958 containerd[2018]: time="2025-10-13T00:00:32.857898891Z" level=info msg="StartContainer for 
\"583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28\"" Oct 13 00:00:32.863365 containerd[2018]: time="2025-10-13T00:00:32.862858815Z" level=info msg="connecting to shim 583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28" address="unix:///run/containerd/s/a1c3942cb42611427cbab22a58e1f8bb560d8b59e344ce74a912f57e8c003a79" protocol=ttrpc version=3 Oct 13 00:00:32.914022 systemd[1]: Started cri-containerd-583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28.scope - libcontainer container 583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28. Oct 13 00:00:32.993529 containerd[2018]: time="2025-10-13T00:00:32.992992096Z" level=info msg="StartContainer for \"583372c03fe00b1d039cb84f9b2209455b77a99f58f293fef0efc2ec902aaf28\" returns successfully" Oct 13 00:00:33.437587 kubelet[3330]: E1013 00:00:33.437514 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:33.690120 kubelet[3330]: I1013 00:00:33.688843 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f6b69f877-9kk72" podStartSLOduration=2.049469691 podStartE2EDuration="12.688822336s" podCreationTimestamp="2025-10-13 00:00:21 +0000 UTC" firstStartedPulling="2025-10-13 00:00:22.14130833 +0000 UTC m=+33.092599113" lastFinishedPulling="2025-10-13 00:00:32.780660975 +0000 UTC m=+43.731951758" observedRunningTime="2025-10-13 00:00:33.687905752 +0000 UTC m=+44.639196559" watchObservedRunningTime="2025-10-13 00:00:33.688822336 +0000 UTC m=+44.640113119" Oct 13 00:00:33.753416 kubelet[3330]: E1013 00:00:33.753253 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.753416 kubelet[3330]: W1013 00:00:33.753288 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.753416 kubelet[3330]: E1013 00:00:33.753320 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.755429 kubelet[3330]: E1013 00:00:33.755154 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.755429 kubelet[3330]: W1013 00:00:33.755188 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.755429 kubelet[3330]: E1013 00:00:33.755268 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.756775 kubelet[3330]: E1013 00:00:33.756682 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.757266 kubelet[3330]: W1013 00:00:33.756938 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.757266 kubelet[3330]: E1013 00:00:33.756978 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.758177 kubelet[3330]: E1013 00:00:33.758008 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.758177 kubelet[3330]: W1013 00:00:33.758039 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.758177 kubelet[3330]: E1013 00:00:33.758069 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.759811 kubelet[3330]: E1013 00:00:33.759682 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.759811 kubelet[3330]: W1013 00:00:33.759775 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.759811 kubelet[3330]: E1013 00:00:33.759809 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.760234 kubelet[3330]: E1013 00:00:33.760198 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.760234 kubelet[3330]: W1013 00:00:33.760227 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.760494 kubelet[3330]: E1013 00:00:33.760251 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.760967 kubelet[3330]: E1013 00:00:33.760917 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.760967 kubelet[3330]: W1013 00:00:33.760950 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.761106 kubelet[3330]: E1013 00:00:33.760978 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.761439 kubelet[3330]: E1013 00:00:33.761411 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.761527 kubelet[3330]: W1013 00:00:33.761437 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.761527 kubelet[3330]: E1013 00:00:33.761459 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.761897 kubelet[3330]: E1013 00:00:33.761868 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.761897 kubelet[3330]: W1013 00:00:33.761894 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.762013 kubelet[3330]: E1013 00:00:33.761919 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.762220 kubelet[3330]: E1013 00:00:33.762193 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.762284 kubelet[3330]: W1013 00:00:33.762218 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.762284 kubelet[3330]: E1013 00:00:33.762238 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.762567 kubelet[3330]: E1013 00:00:33.762541 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.762631 kubelet[3330]: W1013 00:00:33.762565 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.762631 kubelet[3330]: E1013 00:00:33.762585 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.763518 kubelet[3330]: E1013 00:00:33.763468 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.763518 kubelet[3330]: W1013 00:00:33.763500 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.763674 kubelet[3330]: E1013 00:00:33.763527 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.764117 kubelet[3330]: E1013 00:00:33.764055 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.764117 kubelet[3330]: W1013 00:00:33.764087 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.764117 kubelet[3330]: E1013 00:00:33.764112 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.764443 kubelet[3330]: E1013 00:00:33.764411 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.764443 kubelet[3330]: W1013 00:00:33.764438 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.764579 kubelet[3330]: E1013 00:00:33.764459 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.764866 kubelet[3330]: E1013 00:00:33.764834 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.764949 kubelet[3330]: W1013 00:00:33.764862 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.764949 kubelet[3330]: E1013 00:00:33.764885 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.788547 kubelet[3330]: E1013 00:00:33.788496 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.788547 kubelet[3330]: W1013 00:00:33.788530 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.788927 kubelet[3330]: E1013 00:00:33.788557 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.788927 kubelet[3330]: E1013 00:00:33.788916 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.789036 kubelet[3330]: W1013 00:00:33.788933 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.789036 kubelet[3330]: E1013 00:00:33.788954 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.789367 kubelet[3330]: E1013 00:00:33.789339 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.789448 kubelet[3330]: W1013 00:00:33.789365 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.789448 kubelet[3330]: E1013 00:00:33.789387 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.789868 kubelet[3330]: E1013 00:00:33.789837 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.789868 kubelet[3330]: W1013 00:00:33.789865 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.789974 kubelet[3330]: E1013 00:00:33.789889 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.790285 kubelet[3330]: E1013 00:00:33.790257 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.790285 kubelet[3330]: W1013 00:00:33.790282 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.790402 kubelet[3330]: E1013 00:00:33.790304 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.790695 kubelet[3330]: E1013 00:00:33.790669 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.790820 kubelet[3330]: W1013 00:00:33.790693 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.790820 kubelet[3330]: E1013 00:00:33.790745 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.791118 kubelet[3330]: E1013 00:00:33.791091 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.791197 kubelet[3330]: W1013 00:00:33.791116 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.791197 kubelet[3330]: E1013 00:00:33.791138 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.791472 kubelet[3330]: E1013 00:00:33.791445 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.791472 kubelet[3330]: W1013 00:00:33.791469 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.791643 kubelet[3330]: E1013 00:00:33.791489 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.791848 kubelet[3330]: E1013 00:00:33.791821 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.791930 kubelet[3330]: W1013 00:00:33.791846 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.791930 kubelet[3330]: E1013 00:00:33.791868 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.792292 kubelet[3330]: E1013 00:00:33.792259 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.792365 kubelet[3330]: W1013 00:00:33.792289 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.792365 kubelet[3330]: E1013 00:00:33.792313 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.793432 kubelet[3330]: E1013 00:00:33.793372 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.793432 kubelet[3330]: W1013 00:00:33.793410 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.793626 kubelet[3330]: E1013 00:00:33.793484 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.794137 kubelet[3330]: E1013 00:00:33.794086 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.794137 kubelet[3330]: W1013 00:00:33.794119 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.794297 kubelet[3330]: E1013 00:00:33.794146 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.794467 kubelet[3330]: E1013 00:00:33.794431 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.794467 kubelet[3330]: W1013 00:00:33.794459 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.794591 kubelet[3330]: E1013 00:00:33.794481 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.795107 kubelet[3330]: E1013 00:00:33.795076 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.795107 kubelet[3330]: W1013 00:00:33.795105 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.795232 kubelet[3330]: E1013 00:00:33.795128 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.795470 kubelet[3330]: E1013 00:00:33.795443 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.795556 kubelet[3330]: W1013 00:00:33.795468 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.795556 kubelet[3330]: E1013 00:00:33.795489 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.795963 kubelet[3330]: E1013 00:00:33.795907 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.795963 kubelet[3330]: W1013 00:00:33.795937 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.795963 kubelet[3330]: E1013 00:00:33.795961 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:33.796520 kubelet[3330]: E1013 00:00:33.796497 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.796637 kubelet[3330]: W1013 00:00:33.796615 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.796761 kubelet[3330]: E1013 00:00:33.796739 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:33.797231 kubelet[3330]: E1013 00:00:33.797208 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:33.797385 kubelet[3330]: W1013 00:00:33.797323 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:33.797385 kubelet[3330]: E1013 00:00:33.797349 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.771144 kubelet[3330]: E1013 00:00:34.771100 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.771144 kubelet[3330]: W1013 00:00:34.771136 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.771791 kubelet[3330]: E1013 00:00:34.771168 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.771791 kubelet[3330]: E1013 00:00:34.771424 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.771791 kubelet[3330]: W1013 00:00:34.771441 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.771791 kubelet[3330]: E1013 00:00:34.771460 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.771791 kubelet[3330]: E1013 00:00:34.771706 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.771791 kubelet[3330]: W1013 00:00:34.771764 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.771791 kubelet[3330]: E1013 00:00:34.771786 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.772117 kubelet[3330]: E1013 00:00:34.772028 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.772117 kubelet[3330]: W1013 00:00:34.772043 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.772117 kubelet[3330]: E1013 00:00:34.772061 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:34.772345 kubelet[3330]: E1013 00:00:34.772319 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.772414 kubelet[3330]: W1013 00:00:34.772343 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.772414 kubelet[3330]: E1013 00:00:34.772364 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.772636 kubelet[3330]: E1013 00:00:34.772610 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.772700 kubelet[3330]: W1013 00:00:34.772635 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.772700 kubelet[3330]: E1013 00:00:34.772654 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.772945 kubelet[3330]: E1013 00:00:34.772919 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.773016 kubelet[3330]: W1013 00:00:34.772946 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.773016 kubelet[3330]: E1013 00:00:34.772967 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.773237 kubelet[3330]: E1013 00:00:34.773211 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.773293 kubelet[3330]: W1013 00:00:34.773235 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.773293 kubelet[3330]: E1013 00:00:34.773255 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.773573 kubelet[3330]: E1013 00:00:34.773545 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.773647 kubelet[3330]: W1013 00:00:34.773571 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.773647 kubelet[3330]: E1013 00:00:34.773593 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:34.773943 kubelet[3330]: E1013 00:00:34.773916 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.773999 kubelet[3330]: W1013 00:00:34.773941 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.773999 kubelet[3330]: E1013 00:00:34.773962 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.774233 kubelet[3330]: E1013 00:00:34.774208 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.774308 kubelet[3330]: W1013 00:00:34.774232 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.774308 kubelet[3330]: E1013 00:00:34.774252 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.774525 kubelet[3330]: E1013 00:00:34.774500 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.774580 kubelet[3330]: W1013 00:00:34.774524 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.774580 kubelet[3330]: E1013 00:00:34.774547 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.774868 kubelet[3330]: E1013 00:00:34.774841 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.774868 kubelet[3330]: W1013 00:00:34.774866 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.775010 kubelet[3330]: E1013 00:00:34.774886 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.775263 kubelet[3330]: E1013 00:00:34.775235 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.775320 kubelet[3330]: W1013 00:00:34.775260 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.775320 kubelet[3330]: E1013 00:00:34.775284 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:34.775566 kubelet[3330]: E1013 00:00:34.775540 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.775634 kubelet[3330]: W1013 00:00:34.775563 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.775634 kubelet[3330]: E1013 00:00:34.775583 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.797586 kubelet[3330]: E1013 00:00:34.797545 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.797586 kubelet[3330]: W1013 00:00:34.797579 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.797789 kubelet[3330]: E1013 00:00:34.797608 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.798066 kubelet[3330]: E1013 00:00:34.798037 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.798184 kubelet[3330]: W1013 00:00:34.798064 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.798184 kubelet[3330]: E1013 00:00:34.798087 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.798422 kubelet[3330]: E1013 00:00:34.798396 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.798657 kubelet[3330]: W1013 00:00:34.798424 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.798657 kubelet[3330]: E1013 00:00:34.798445 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.798909 kubelet[3330]: E1013 00:00:34.798888 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.799023 kubelet[3330]: W1013 00:00:34.799001 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.799131 kubelet[3330]: E1013 00:00:34.799107 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:34.799564 kubelet[3330]: E1013 00:00:34.799498 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.799564 kubelet[3330]: W1013 00:00:34.799521 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.799564 kubelet[3330]: E1013 00:00:34.799541 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.800165 kubelet[3330]: E1013 00:00:34.800101 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.800165 kubelet[3330]: W1013 00:00:34.800122 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.800165 kubelet[3330]: E1013 00:00:34.800142 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.800864 kubelet[3330]: E1013 00:00:34.800797 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.800864 kubelet[3330]: W1013 00:00:34.800820 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.800864 kubelet[3330]: E1013 00:00:34.800841 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.801379 kubelet[3330]: E1013 00:00:34.801320 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.801379 kubelet[3330]: W1013 00:00:34.801340 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.801379 kubelet[3330]: E1013 00:00:34.801357 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.802010 kubelet[3330]: E1013 00:00:34.801945 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.802010 kubelet[3330]: W1013 00:00:34.801967 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.802010 kubelet[3330]: E1013 00:00:34.801987 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:34.802568 kubelet[3330]: E1013 00:00:34.802505 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.802568 kubelet[3330]: W1013 00:00:34.802525 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.802568 kubelet[3330]: E1013 00:00:34.802545 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.803131 kubelet[3330]: E1013 00:00:34.803111 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.803245 kubelet[3330]: W1013 00:00:34.803224 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.803347 kubelet[3330]: E1013 00:00:34.803327 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.804563 kubelet[3330]: E1013 00:00:34.804346 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.804563 kubelet[3330]: W1013 00:00:34.804377 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.804563 kubelet[3330]: E1013 00:00:34.804404 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.805087 kubelet[3330]: E1013 00:00:34.804961 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.805087 kubelet[3330]: W1013 00:00:34.804982 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.805087 kubelet[3330]: E1013 00:00:34.805003 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.805684 kubelet[3330]: E1013 00:00:34.805511 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.805684 kubelet[3330]: W1013 00:00:34.805536 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.805684 kubelet[3330]: E1013 00:00:34.805556 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:34.806191 kubelet[3330]: E1013 00:00:34.806170 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.806287 kubelet[3330]: W1013 00:00:34.806266 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.806398 kubelet[3330]: E1013 00:00:34.806377 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.806918 kubelet[3330]: E1013 00:00:34.806888 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.806918 kubelet[3330]: W1013 00:00:34.806916 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.807084 kubelet[3330]: E1013 00:00:34.806940 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.807637 kubelet[3330]: E1013 00:00:34.807492 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.807637 kubelet[3330]: W1013 00:00:34.807515 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.807637 kubelet[3330]: E1013 00:00:34.807537 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:34.808196 kubelet[3330]: E1013 00:00:34.808120 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:34.808196 kubelet[3330]: W1013 00:00:34.808140 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:34.808196 kubelet[3330]: E1013 00:00:34.808161 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.437210 kubelet[3330]: E1013 00:00:35.436688 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:35.682482 kubelet[3330]: E1013 00:00:35.682450 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.682688 kubelet[3330]: W1013 00:00:35.682663 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.682902 kubelet[3330]: E1013 00:00:35.682838 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.683437 kubelet[3330]: E1013 00:00:35.683400 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.683618 kubelet[3330]: W1013 00:00:35.683542 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.683618 kubelet[3330]: E1013 00:00:35.683572 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.684220 kubelet[3330]: E1013 00:00:35.684102 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.684220 kubelet[3330]: W1013 00:00:35.684153 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.684220 kubelet[3330]: E1013 00:00:35.684176 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.684997 kubelet[3330]: E1013 00:00:35.684803 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.684997 kubelet[3330]: W1013 00:00:35.684823 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.684997 kubelet[3330]: E1013 00:00:35.684844 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.685354 kubelet[3330]: E1013 00:00:35.685280 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.685354 kubelet[3330]: W1013 00:00:35.685302 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.685354 kubelet[3330]: E1013 00:00:35.685322 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.685961 kubelet[3330]: E1013 00:00:35.685933 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.686136 kubelet[3330]: W1013 00:00:35.686057 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.686136 kubelet[3330]: E1013 00:00:35.686081 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.686648 kubelet[3330]: E1013 00:00:35.686539 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.686648 kubelet[3330]: W1013 00:00:35.686582 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.686648 kubelet[3330]: E1013 00:00:35.686602 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.687365 kubelet[3330]: E1013 00:00:35.687221 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.687365 kubelet[3330]: W1013 00:00:35.687243 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.687365 kubelet[3330]: E1013 00:00:35.687266 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.688485 kubelet[3330]: E1013 00:00:35.688430 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.688676 kubelet[3330]: W1013 00:00:35.688455 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.688676 kubelet[3330]: E1013 00:00:35.688603 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.689303 kubelet[3330]: E1013 00:00:35.689126 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.689303 kubelet[3330]: W1013 00:00:35.689152 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.689303 kubelet[3330]: E1013 00:00:35.689173 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.689693 kubelet[3330]: E1013 00:00:35.689584 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.689693 kubelet[3330]: W1013 00:00:35.689603 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.689693 kubelet[3330]: E1013 00:00:35.689623 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.690223 kubelet[3330]: E1013 00:00:35.690173 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.690308 kubelet[3330]: W1013 00:00:35.690196 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.690405 kubelet[3330]: E1013 00:00:35.690383 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.690926 kubelet[3330]: E1013 00:00:35.690817 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.690926 kubelet[3330]: W1013 00:00:35.690838 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.690926 kubelet[3330]: E1013 00:00:35.690858 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.691458 kubelet[3330]: E1013 00:00:35.691353 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.691458 kubelet[3330]: W1013 00:00:35.691372 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.691458 kubelet[3330]: E1013 00:00:35.691392 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.692070 kubelet[3330]: E1013 00:00:35.691942 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.692070 kubelet[3330]: W1013 00:00:35.691962 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.692070 kubelet[3330]: E1013 00:00:35.691982 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.706094 kubelet[3330]: E1013 00:00:35.705951 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.706094 kubelet[3330]: W1013 00:00:35.705982 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.706094 kubelet[3330]: E1013 00:00:35.706011 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.706405 kubelet[3330]: E1013 00:00:35.706377 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.706465 kubelet[3330]: W1013 00:00:35.706404 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.706465 kubelet[3330]: E1013 00:00:35.706426 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.706864 kubelet[3330]: E1013 00:00:35.706820 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.706864 kubelet[3330]: W1013 00:00:35.706850 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.706987 kubelet[3330]: E1013 00:00:35.706873 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.707261 kubelet[3330]: E1013 00:00:35.707235 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.707322 kubelet[3330]: W1013 00:00:35.707260 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.707322 kubelet[3330]: E1013 00:00:35.707281 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.707623 kubelet[3330]: E1013 00:00:35.707597 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.707684 kubelet[3330]: W1013 00:00:35.707622 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.707684 kubelet[3330]: E1013 00:00:35.707643 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.708011 kubelet[3330]: E1013 00:00:35.707985 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.708069 kubelet[3330]: W1013 00:00:35.708010 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.708069 kubelet[3330]: E1013 00:00:35.708031 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.708440 kubelet[3330]: E1013 00:00:35.708376 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.708440 kubelet[3330]: W1013 00:00:35.708401 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.708440 kubelet[3330]: E1013 00:00:35.708422 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.708835 kubelet[3330]: E1013 00:00:35.708808 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.708898 kubelet[3330]: W1013 00:00:35.708834 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.708898 kubelet[3330]: E1013 00:00:35.708855 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.709255 kubelet[3330]: E1013 00:00:35.709227 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.709383 kubelet[3330]: W1013 00:00:35.709253 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.709383 kubelet[3330]: E1013 00:00:35.709275 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.709601 kubelet[3330]: E1013 00:00:35.709565 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.709601 kubelet[3330]: W1013 00:00:35.709593 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.710182 kubelet[3330]: E1013 00:00:35.709614 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.710182 kubelet[3330]: E1013 00:00:35.710023 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.710182 kubelet[3330]: W1013 00:00:35.710041 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.710182 kubelet[3330]: E1013 00:00:35.710062 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.710436 kubelet[3330]: E1013 00:00:35.710336 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.710436 kubelet[3330]: W1013 00:00:35.710351 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.710436 kubelet[3330]: E1013 00:00:35.710369 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.710663 kubelet[3330]: E1013 00:00:35.710637 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.710663 kubelet[3330]: W1013 00:00:35.710662 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.710663 kubelet[3330]: E1013 00:00:35.710682 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.711025 kubelet[3330]: E1013 00:00:35.710998 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.711082 kubelet[3330]: W1013 00:00:35.711023 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.711082 kubelet[3330]: E1013 00:00:35.711044 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:35.711584 kubelet[3330]: E1013 00:00:35.711537 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.711584 kubelet[3330]: W1013 00:00:35.711567 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.711901 kubelet[3330]: E1013 00:00:35.711589 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.712003 kubelet[3330]: E1013 00:00:35.711971 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.712105 kubelet[3330]: W1013 00:00:35.712000 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.712105 kubelet[3330]: E1013 00:00:35.712022 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.712621 kubelet[3330]: E1013 00:00:35.712593 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.712918 kubelet[3330]: W1013 00:00:35.712791 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.712918 kubelet[3330]: E1013 00:00:35.712828 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:00:35.713275 kubelet[3330]: E1013 00:00:35.713246 3330 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:00:35.713336 kubelet[3330]: W1013 00:00:35.713272 3330 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:00:35.713336 kubelet[3330]: E1013 00:00:35.713295 3330 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:00:37.438511 kubelet[3330]: E1013 00:00:37.438022 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:39.437703 kubelet[3330]: E1013 00:00:39.437035 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:41.443746 kubelet[3330]: E1013 00:00:41.443498 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:43.437255 kubelet[3330]: E1013 00:00:43.436776 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:45.438671 kubelet[3330]: E1013 00:00:45.436959 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:47.436995 kubelet[3330]: E1013 00:00:47.436490 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:47.688819 containerd[2018]: time="2025-10-13T00:00:47.688056269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:47.691874 containerd[2018]: time="2025-10-13T00:00:47.691825073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 00:00:47.694085 containerd[2018]: time="2025-10-13T00:00:47.694040417Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:47.698394 containerd[2018]: time="2025-10-13T00:00:47.698308457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:47.699636 containerd[2018]: time="2025-10-13T00:00:47.699590057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id 
\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 14.916976982s" Oct 13 00:00:47.699835 containerd[2018]: time="2025-10-13T00:00:47.699804029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 00:00:47.710982 containerd[2018]: time="2025-10-13T00:00:47.710490473Z" level=info msg="CreateContainer within sandbox \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 00:00:47.741061 containerd[2018]: time="2025-10-13T00:00:47.740994269Z" level=info msg="Container 199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:00:47.764396 containerd[2018]: time="2025-10-13T00:00:47.764346377Z" level=info msg="CreateContainer within sandbox \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\"" Oct 13 00:00:47.766787 containerd[2018]: time="2025-10-13T00:00:47.765524837Z" level=info msg="StartContainer for \"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\"" Oct 13 00:00:47.768895 containerd[2018]: time="2025-10-13T00:00:47.768845609Z" level=info msg="connecting to shim 199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea" address="unix:///run/containerd/s/86e29293f329c8eca4a675d32d180b0661399bec45f293b310a8e93cea9fe49d" protocol=ttrpc version=3 Oct 13 00:00:47.810315 systemd[1]: Started cri-containerd-199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea.scope - libcontainer container 199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea. Oct 13 00:00:47.891400 containerd[2018]: time="2025-10-13T00:00:47.891346050Z" level=info msg="StartContainer for \"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\" returns successfully" Oct 13 00:00:47.923086 systemd[1]: cri-containerd-199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea.scope: Deactivated successfully. Oct 13 00:00:47.932159 containerd[2018]: time="2025-10-13T00:00:47.931492158Z" level=info msg="received exit event container_id:\"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\" id:\"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\" pid:4181 exited_at:{seconds:1760313647 nanos:930950670}" Oct 13 00:00:47.932834 containerd[2018]: time="2025-10-13T00:00:47.932625294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\" id:\"199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea\" pid:4181 exited_at:{seconds:1760313647 nanos:930950670}" Oct 13 00:00:47.972276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-199f540b273a6e80d6916a0668c6905d9872919be50982553fdec9a15b76f3ea-rootfs.mount: Deactivated successfully. 
Oct 13 00:00:48.718607 containerd[2018]: time="2025-10-13T00:00:48.717408330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 00:00:49.437155 kubelet[3330]: E1013 00:00:49.436645 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:51.438608 kubelet[3330]: E1013 00:00:51.438089 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:53.438685 kubelet[3330]: E1013 00:00:53.438026 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:55.437038 kubelet[3330]: E1013 00:00:55.436684 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:56.299125 containerd[2018]: time="2025-10-13T00:00:56.299071404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:56.301324 containerd[2018]: time="2025-10-13T00:00:56.301263324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 00:00:56.303494 containerd[2018]: time="2025-10-13T00:00:56.303034200Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:56.305901 containerd[2018]: time="2025-10-13T00:00:56.305852220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:00:56.307512 containerd[2018]: time="2025-10-13T00:00:56.307453212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 7.589604242s" Oct 13 00:00:56.307825 containerd[2018]: time="2025-10-13T00:00:56.307508100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 00:00:56.324931 containerd[2018]: time="2025-10-13T00:00:56.324220884Z" level=info msg="CreateContainer within sandbox \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 00:00:56.338251 containerd[2018]: time="2025-10-13T00:00:56.338185224Z" level=info msg="Container 6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:00:56.344319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026967702.mount: Deactivated successfully. Oct 13 00:00:56.360083 containerd[2018]: time="2025-10-13T00:00:56.360004404Z" level=info msg="CreateContainer within sandbox \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\"" Oct 13 00:00:56.361861 containerd[2018]: time="2025-10-13T00:00:56.361792572Z" level=info msg="StartContainer for \"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\"" Oct 13 00:00:56.365529 containerd[2018]: time="2025-10-13T00:00:56.365405196Z" level=info msg="connecting to shim 6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab" address="unix:///run/containerd/s/86e29293f329c8eca4a675d32d180b0661399bec45f293b310a8e93cea9fe49d" protocol=ttrpc version=3 Oct 13 00:00:56.408297 systemd[1]: Started cri-containerd-6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab.scope - libcontainer container 6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab. Oct 13 00:00:56.494795 containerd[2018]: time="2025-10-13T00:00:56.494703481Z" level=info msg="StartContainer for \"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\" returns successfully" Oct 13 00:00:57.438067 kubelet[3330]: E1013 00:00:57.437532 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:57.474269 containerd[2018]: time="2025-10-13T00:00:57.474199922Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:00:57.482009 systemd[1]: cri-containerd-6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab.scope: Deactivated successfully. Oct 13 00:00:57.482985 systemd[1]: cri-containerd-6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab.scope: Consumed 920ms CPU time, 185M memory peak, 165.8M written to disk. 
Oct 13 00:00:57.490103 containerd[2018]: time="2025-10-13T00:00:57.489978782Z" level=info msg="received exit event container_id:\"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\" id:\"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\" pid:4242 exited_at:{seconds:1760313657 nanos:487536434}" Oct 13 00:00:57.490434 containerd[2018]: time="2025-10-13T00:00:57.490239182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\" id:\"6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab\" pid:4242 exited_at:{seconds:1760313657 nanos:487536434}" Oct 13 00:00:57.570153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6680fb62bdc3a53119f1b28f4bc64eb4017fa51194197521d1dd54d226592cab-rootfs.mount: Deactivated successfully. Oct 13 00:00:57.580114 kubelet[3330]: I1013 00:00:57.580050 3330 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 00:00:57.700555 systemd[1]: Created slice kubepods-burstable-pod422228b7_7530_44f9_8a2d_a1df9e389906.slice - libcontainer container kubepods-burstable-pod422228b7_7530_44f9_8a2d_a1df9e389906.slice. Oct 13 00:00:57.711228 kubelet[3330]: E1013 00:00:57.711162 3330 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-6q65z\" is forbidden: User \"system:node:ip-172-31-31-230\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-230' and this object" podUID="422228b7-7530-44f9-8a2d-a1df9e389906" pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:00:57.717013 kubelet[3330]: E1013 00:00:57.713214 3330 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-31-230\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-230' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Oct 13 00:00:57.728078 systemd[1]: Created slice kubepods-burstable-podb801fade_64ce_4951_a21e_a553877deb86.slice - libcontainer container kubepods-burstable-podb801fade_64ce_4951_a21e_a553877deb86.slice. Oct 13 00:00:57.769515 systemd[1]: Created slice kubepods-besteffort-poddf1be13a_c754_44d7_acb4_1da7bd922406.slice - libcontainer container kubepods-besteffort-poddf1be13a_c754_44d7_acb4_1da7bd922406.slice. 
Oct 13 00:00:57.774813 kubelet[3330]: I1013 00:00:57.774746 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6xr7\" (UniqueName: \"kubernetes.io/projected/b801fade-64ce-4951-a21e-a553877deb86-kube-api-access-t6xr7\") pod \"coredns-66bc5c9577-gbf22\" (UID: \"b801fade-64ce-4951-a21e-a553877deb86\") " pod="kube-system/coredns-66bc5c9577-gbf22" Oct 13 00:00:57.774952 kubelet[3330]: I1013 00:00:57.774819 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/422228b7-7530-44f9-8a2d-a1df9e389906-config-volume\") pod \"coredns-66bc5c9577-6q65z\" (UID: \"422228b7-7530-44f9-8a2d-a1df9e389906\") " pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:00:57.774952 kubelet[3330]: I1013 00:00:57.774867 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b801fade-64ce-4951-a21e-a553877deb86-config-volume\") pod \"coredns-66bc5c9577-gbf22\" (UID: \"b801fade-64ce-4951-a21e-a553877deb86\") " pod="kube-system/coredns-66bc5c9577-gbf22" Oct 13 00:00:57.774952 kubelet[3330]: I1013 00:00:57.774913 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpdk\" (UniqueName: \"kubernetes.io/projected/422228b7-7530-44f9-8a2d-a1df9e389906-kube-api-access-bcpdk\") pod \"coredns-66bc5c9577-6q65z\" (UID: \"422228b7-7530-44f9-8a2d-a1df9e389906\") " pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:00:57.862245 systemd[1]: Created slice kubepods-besteffort-pod04c57b42_e794_43a6_9970_4992816d2fff.slice - libcontainer container kubepods-besteffort-pod04c57b42_e794_43a6_9970_4992816d2fff.slice. 
Oct 13 00:00:57.877106 kubelet[3330]: I1013 00:00:57.876960 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04c57b42-e794-43a6-9970-4992816d2fff-config\") pod \"goldmane-854f97d977-pp9j2\" (UID: \"04c57b42-e794-43a6-9970-4992816d2fff\") " pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:00:57.893940 kubelet[3330]: I1013 00:00:57.877246 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/04c57b42-e794-43a6-9970-4992816d2fff-goldmane-key-pair\") pod \"goldmane-854f97d977-pp9j2\" (UID: \"04c57b42-e794-43a6-9970-4992816d2fff\") " pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:00:57.893940 kubelet[3330]: I1013 00:00:57.877518 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2fxk\" (UniqueName: \"kubernetes.io/projected/df1be13a-c754-44d7-acb4-1da7bd922406-kube-api-access-d2fxk\") pod \"calico-kube-controllers-64d6998867-blpss\" (UID: \"df1be13a-c754-44d7-acb4-1da7bd922406\") " pod="calico-system/calico-kube-controllers-64d6998867-blpss" Oct 13 00:00:57.893940 kubelet[3330]: I1013 00:00:57.878582 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1be13a-c754-44d7-acb4-1da7bd922406-tigera-ca-bundle\") pod \"calico-kube-controllers-64d6998867-blpss\" (UID: \"df1be13a-c754-44d7-acb4-1da7bd922406\") " pod="calico-system/calico-kube-controllers-64d6998867-blpss" Oct 13 00:00:57.893940 kubelet[3330]: I1013 00:00:57.879126 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgjk\" (UniqueName: \"kubernetes.io/projected/04c57b42-e794-43a6-9970-4992816d2fff-kube-api-access-hlgjk\") pod \"goldmane-854f97d977-pp9j2\" (UID: \"04c57b42-e794-43a6-9970-4992816d2fff\") " pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:00:57.893940 kubelet[3330]: I1013 00:00:57.879172 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04c57b42-e794-43a6-9970-4992816d2fff-goldmane-ca-bundle\") pod \"goldmane-854f97d977-pp9j2\" (UID: \"04c57b42-e794-43a6-9970-4992816d2fff\") " pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:00:57.953179 systemd[1]: Started sshd@7-172.31.31.230:22-139.178.89.65:50150.service - OpenSSH per-connection server daemon (139.178.89.65:50150). Oct 13 00:00:57.964071 systemd[1]: Created slice kubepods-besteffort-pod55154b5a_564e_4426_8465_982b6288d007.slice - libcontainer container kubepods-besteffort-pod55154b5a_564e_4426_8465_982b6288d007.slice. 
Oct 13 00:00:57.985773 kubelet[3330]: I1013 00:00:57.983979 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqw58\" (UniqueName: \"kubernetes.io/projected/55154b5a-564e-4426-8465-982b6288d007-kube-api-access-cqw58\") pod \"calico-apiserver-66885c755-k27mm\" (UID: \"55154b5a-564e-4426-8465-982b6288d007\") " pod="calico-apiserver/calico-apiserver-66885c755-k27mm" Oct 13 00:00:57.985773 kubelet[3330]: I1013 00:00:57.984083 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/55154b5a-564e-4426-8465-982b6288d007-calico-apiserver-certs\") pod \"calico-apiserver-66885c755-k27mm\" (UID: \"55154b5a-564e-4426-8465-982b6288d007\") " pod="calico-apiserver/calico-apiserver-66885c755-k27mm" Oct 13 00:00:58.060747 systemd[1]: Created slice kubepods-besteffort-pod355ad4db_b664_4420_b21a_f0cbc5ec56fc.slice - libcontainer container kubepods-besteffort-pod355ad4db_b664_4420_b21a_f0cbc5ec56fc.slice. Oct 13 00:00:58.085336 kubelet[3330]: I1013 00:00:58.085203 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9fv\" (UniqueName: \"kubernetes.io/projected/355ad4db-b664-4420-b21a-f0cbc5ec56fc-kube-api-access-9q9fv\") pod \"calico-apiserver-66885c755-gvbpr\" (UID: \"355ad4db-b664-4420-b21a-f0cbc5ec56fc\") " pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" Oct 13 00:00:58.085336 kubelet[3330]: I1013 00:00:58.085337 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/355ad4db-b664-4420-b21a-f0cbc5ec56fc-calico-apiserver-certs\") pod \"calico-apiserver-66885c755-gvbpr\" (UID: \"355ad4db-b664-4420-b21a-f0cbc5ec56fc\") " pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" Oct 13 00:00:58.126677 systemd[1]: Created slice kubepods-besteffort-pod166905be_c1b7_43c4_bca9_2277f7d3da47.slice - libcontainer container kubepods-besteffort-pod166905be_c1b7_43c4_bca9_2277f7d3da47.slice. 
Oct 13 00:00:58.186097 kubelet[3330]: I1013 00:00:58.186046 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-backend-key-pair\") pod \"whisker-7f5488c84c-4b69c\" (UID: \"166905be-c1b7-43c4-bca9-2277f7d3da47\") " pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:00:58.187000 kubelet[3330]: I1013 00:00:58.186951 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-ca-bundle\") pod \"whisker-7f5488c84c-4b69c\" (UID: \"166905be-c1b7-43c4-bca9-2277f7d3da47\") " pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:00:58.190641 kubelet[3330]: I1013 00:00:58.189843 3330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5rvl\" (UniqueName: \"kubernetes.io/projected/166905be-c1b7-43c4-bca9-2277f7d3da47-kube-api-access-f5rvl\") pod \"whisker-7f5488c84c-4b69c\" (UID: \"166905be-c1b7-43c4-bca9-2277f7d3da47\") " pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:00:58.198693 containerd[2018]: time="2025-10-13T00:00:58.198637921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-pp9j2,Uid:04c57b42-e794-43a6-9970-4992816d2fff,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:58.246864 sshd[4277]: Accepted publickey for core from 139.178.89.65 port 50150 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:00:58.251630 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:00:58.266676 systemd-logind[1987]: New session 8 of user core. Oct 13 00:00:58.270992 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 00:00:58.305697 containerd[2018]: time="2025-10-13T00:00:58.305633174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-k27mm,Uid:55154b5a-564e-4426-8465-982b6288d007,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:00:58.380770 containerd[2018]: time="2025-10-13T00:00:58.380221322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-gvbpr,Uid:355ad4db-b664-4420-b21a-f0cbc5ec56fc,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:00:58.385301 containerd[2018]: time="2025-10-13T00:00:58.384852650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d6998867-blpss,Uid:df1be13a-c754-44d7-acb4-1da7bd922406,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:58.441602 containerd[2018]: time="2025-10-13T00:00:58.441253598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5488c84c-4b69c,Uid:166905be-c1b7-43c4-bca9-2277f7d3da47,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:58.605874 containerd[2018]: time="2025-10-13T00:00:58.604150623Z" level=error msg="Failed to destroy network for sandbox \"51d6955a98c19a1dcfa5645ee38a51652a1e3278e96b4292baeb2f539bacffe4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.614346 systemd[1]: run-netns-cni\x2dacc5d301\x2d2b6b\x2d8cd9\x2db7b3\x2d6b40577b7a26.mount: Deactivated successfully. 
Oct 13 00:00:58.622589 containerd[2018]: time="2025-10-13T00:00:58.615827415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-pp9j2,Uid:04c57b42-e794-43a6-9970-4992816d2fff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d6955a98c19a1dcfa5645ee38a51652a1e3278e96b4292baeb2f539bacffe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.622883 kubelet[3330]: E1013 00:00:58.622056 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d6955a98c19a1dcfa5645ee38a51652a1e3278e96b4292baeb2f539bacffe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.622883 kubelet[3330]: E1013 00:00:58.622139 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d6955a98c19a1dcfa5645ee38a51652a1e3278e96b4292baeb2f539bacffe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:00:58.622883 kubelet[3330]: E1013 00:00:58.622171 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d6955a98c19a1dcfa5645ee38a51652a1e3278e96b4292baeb2f539bacffe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:00:58.626382 kubelet[3330]: E1013 00:00:58.622262 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-pp9j2_calico-system(04c57b42-e794-43a6-9970-4992816d2fff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-pp9j2_calico-system(04c57b42-e794-43a6-9970-4992816d2fff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51d6955a98c19a1dcfa5645ee38a51652a1e3278e96b4292baeb2f539bacffe4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-pp9j2" podUID="04c57b42-e794-43a6-9970-4992816d2fff" Oct 13 00:00:58.659669 containerd[2018]: time="2025-10-13T00:00:58.659590300Z" level=error msg="Failed to destroy network for sandbox \"8b85386ca3f5e6dd5072473ec53980157b4235c31cc33141916301fccc648a02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.665400 containerd[2018]: time="2025-10-13T00:00:58.665303716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-k27mm,Uid:55154b5a-564e-4426-8465-982b6288d007,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b85386ca3f5e6dd5072473ec53980157b4235c31cc33141916301fccc648a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.668366 kubelet[3330]: E1013 00:00:58.666904 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b85386ca3f5e6dd5072473ec53980157b4235c31cc33141916301fccc648a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.668366 kubelet[3330]: E1013 00:00:58.666981 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b85386ca3f5e6dd5072473ec53980157b4235c31cc33141916301fccc648a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" Oct 13 00:00:58.668366 kubelet[3330]: E1013 00:00:58.667017 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b85386ca3f5e6dd5072473ec53980157b4235c31cc33141916301fccc648a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" Oct 13 00:00:58.668693 kubelet[3330]: E1013 00:00:58.667097 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66885c755-k27mm_calico-apiserver(55154b5a-564e-4426-8465-982b6288d007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66885c755-k27mm_calico-apiserver(55154b5a-564e-4426-8465-982b6288d007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b85386ca3f5e6dd5072473ec53980157b4235c31cc33141916301fccc648a02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" podUID="55154b5a-564e-4426-8465-982b6288d007" Oct 13 00:00:58.669356 systemd[1]: run-netns-cni\x2dcc572010\x2dec5c\x2d83a8\x2d9506\x2dd6c0d9f2f727.mount: Deactivated successfully. Oct 13 00:00:58.744775 sshd[4298]: Connection closed by 139.178.89.65 port 50150 Oct 13 00:00:58.745536 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Oct 13 00:00:58.760788 systemd[1]: sshd@7-172.31.31.230:22-139.178.89.65:50150.service: Deactivated successfully. Oct 13 00:00:58.770341 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 00:00:58.775236 systemd-logind[1987]: Session 8 logged out. Waiting for processes to exit. Oct 13 00:00:58.779271 systemd-logind[1987]: Removed session 8. 
Oct 13 00:00:58.792232 containerd[2018]: time="2025-10-13T00:00:58.792139168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 00:00:58.815012 containerd[2018]: time="2025-10-13T00:00:58.814949836Z" level=error msg="Failed to destroy network for sandbox \"7e0a2cdaf466fadb56fbce164ba8e3adfce9ef771e8c4abb9684f58afec704b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.818772 containerd[2018]: time="2025-10-13T00:00:58.817958740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d6998867-blpss,Uid:df1be13a-c754-44d7-acb4-1da7bd922406,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0a2cdaf466fadb56fbce164ba8e3adfce9ef771e8c4abb9684f58afec704b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.820764 kubelet[3330]: E1013 00:00:58.820163 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0a2cdaf466fadb56fbce164ba8e3adfce9ef771e8c4abb9684f58afec704b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.824017 kubelet[3330]: E1013 00:00:58.820979 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0a2cdaf466fadb56fbce164ba8e3adfce9ef771e8c4abb9684f58afec704b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d6998867-blpss" Oct 13 00:00:58.824017 kubelet[3330]: E1013 00:00:58.821032 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e0a2cdaf466fadb56fbce164ba8e3adfce9ef771e8c4abb9684f58afec704b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d6998867-blpss" Oct 13 00:00:58.824017 kubelet[3330]: E1013 00:00:58.821125 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64d6998867-blpss_calico-system(df1be13a-c754-44d7-acb4-1da7bd922406)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64d6998867-blpss_calico-system(df1be13a-c754-44d7-acb4-1da7bd922406)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e0a2cdaf466fadb56fbce164ba8e3adfce9ef771e8c4abb9684f58afec704b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d6998867-blpss" podUID="df1be13a-c754-44d7-acb4-1da7bd922406" Oct 13 00:00:58.824518 systemd[1]: run-netns-cni\x2dac25cf6f\x2d6c6f\x2dd977\x2ddfcc\x2d6c67df7b18f8.mount: 
Deactivated successfully. Oct 13 00:00:58.858069 containerd[2018]: time="2025-10-13T00:00:58.857131097Z" level=error msg="Failed to destroy network for sandbox \"8fe417a6aecc695fd0846f591417c0e91369cccb4cb67c869e8a77822ee0dd22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.863064 containerd[2018]: time="2025-10-13T00:00:58.860973125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-gvbpr,Uid:355ad4db-b664-4420-b21a-f0cbc5ec56fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe417a6aecc695fd0846f591417c0e91369cccb4cb67c869e8a77822ee0dd22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.863246 kubelet[3330]: E1013 00:00:58.861298 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe417a6aecc695fd0846f591417c0e91369cccb4cb67c869e8a77822ee0dd22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.863246 kubelet[3330]: E1013 00:00:58.861369 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe417a6aecc695fd0846f591417c0e91369cccb4cb67c869e8a77822ee0dd22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" Oct 13 00:00:58.863246 kubelet[3330]: E1013 00:00:58.861419 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe417a6aecc695fd0846f591417c0e91369cccb4cb67c869e8a77822ee0dd22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" Oct 13 00:00:58.863424 kubelet[3330]: E1013 00:00:58.861502 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66885c755-gvbpr_calico-apiserver(355ad4db-b664-4420-b21a-f0cbc5ec56fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66885c755-gvbpr_calico-apiserver(355ad4db-b664-4420-b21a-f0cbc5ec56fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fe417a6aecc695fd0846f591417c0e91369cccb4cb67c869e8a77822ee0dd22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" podUID="355ad4db-b664-4420-b21a-f0cbc5ec56fc" Oct 13 00:00:58.865812 containerd[2018]: time="2025-10-13T00:00:58.863945537Z" level=error msg="Failed to destroy network for sandbox \"d663d506a620fbafd006d523fddefdaa158837e688f33aa464141272c4514984\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.868292 containerd[2018]: time="2025-10-13T00:00:58.868074617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5488c84c-4b69c,Uid:166905be-c1b7-43c4-bca9-2277f7d3da47,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d663d506a620fbafd006d523fddefdaa158837e688f33aa464141272c4514984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.869539 kubelet[3330]: E1013 00:00:58.868910 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d663d506a620fbafd006d523fddefdaa158837e688f33aa464141272c4514984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:58.869539 kubelet[3330]: E1013 00:00:58.868989 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d663d506a620fbafd006d523fddefdaa158837e688f33aa464141272c4514984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:00:58.869539 kubelet[3330]: E1013 00:00:58.869022 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d663d506a620fbafd006d523fddefdaa158837e688f33aa464141272c4514984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:00:58.870935 kubelet[3330]: E1013 00:00:58.869109 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f5488c84c-4b69c_calico-system(166905be-c1b7-43c4-bca9-2277f7d3da47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f5488c84c-4b69c_calico-system(166905be-c1b7-43c4-bca9-2277f7d3da47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d663d506a620fbafd006d523fddefdaa158837e688f33aa464141272c4514984\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f5488c84c-4b69c" podUID="166905be-c1b7-43c4-bca9-2277f7d3da47" Oct 13 00:00:58.870449 systemd[1]: run-netns-cni\x2dd2eea277\x2df230\x2d90c3\x2dd3ad\x2d8d7a9cd4ca8a.mount: Deactivated successfully. Oct 13 00:00:58.877702 kubelet[3330]: E1013 00:00:58.877642 3330 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:00:58.877702 kubelet[3330]: E1013 00:00:58.877813 3330 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b801fade-64ce-4951-a21e-a553877deb86-config-volume podName:b801fade-64ce-4951-a21e-a553877deb86 nodeName:}" failed. 
No retries permitted until 2025-10-13 00:00:59.377771553 +0000 UTC m=+70.329062348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b801fade-64ce-4951-a21e-a553877deb86-config-volume") pod "coredns-66bc5c9577-gbf22" (UID: "b801fade-64ce-4951-a21e-a553877deb86") : failed to sync configmap cache: timed out waiting for the condition Oct 13 00:00:58.878124 kubelet[3330]: E1013 00:00:58.877881 3330 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:00:58.878124 kubelet[3330]: E1013 00:00:58.877939 3330 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/422228b7-7530-44f9-8a2d-a1df9e389906-config-volume podName:422228b7-7530-44f9-8a2d-a1df9e389906 nodeName:}" failed. No retries permitted until 2025-10-13 00:00:59.377921529 +0000 UTC m=+70.329212324 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/422228b7-7530-44f9-8a2d-a1df9e389906-config-volume") pod "coredns-66bc5c9577-6q65z" (UID: "422228b7-7530-44f9-8a2d-a1df9e389906") : failed to sync configmap cache: timed out waiting for the condition Oct 13 00:00:59.454816 systemd[1]: Created slice kubepods-besteffort-pode00ccdfa_8874_41a5_8202_ed22b850ea32.slice - libcontainer container kubepods-besteffort-pode00ccdfa_8874_41a5_8202_ed22b850ea32.slice. Oct 13 00:00:59.465100 containerd[2018]: time="2025-10-13T00:00:59.465030844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6q2t,Uid:e00ccdfa-8874-41a5-8202-ed22b850ea32,Namespace:calico-system,Attempt:0,}" Oct 13 00:00:59.522759 containerd[2018]: time="2025-10-13T00:00:59.522607972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6q65z,Uid:422228b7-7530-44f9-8a2d-a1df9e389906,Namespace:kube-system,Attempt:0,}" Oct 13 00:00:59.542659 containerd[2018]: time="2025-10-13T00:00:59.542594476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbf22,Uid:b801fade-64ce-4951-a21e-a553877deb86,Namespace:kube-system,Attempt:0,}" Oct 13 00:00:59.575595 systemd[1]: run-netns-cni\x2daa57b7ad\x2d7579\x2de7a7\x2de541\x2dc0aed6bd73c2.mount: Deactivated successfully. Oct 13 00:00:59.673454 containerd[2018]: time="2025-10-13T00:00:59.673098341Z" level=error msg="Failed to destroy network for sandbox \"e976dda6e76227ec3ea6f3c6e5c18f96d69ea7657fbd0add1cd4491703c4216b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.684381 containerd[2018]: time="2025-10-13T00:00:59.680924537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6q2t,Uid:e00ccdfa-8874-41a5-8202-ed22b850ea32,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e976dda6e76227ec3ea6f3c6e5c18f96d69ea7657fbd0add1cd4491703c4216b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.683105 systemd[1]: run-netns-cni\x2dbc950e8a\x2d265e\x2d4c05\x2d7062\x2d533715a8a07a.mount: Deactivated successfully. 
Oct 13 00:00:59.684949 kubelet[3330]: E1013 00:00:59.681299 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e976dda6e76227ec3ea6f3c6e5c18f96d69ea7657fbd0add1cd4491703c4216b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.684949 kubelet[3330]: E1013 00:00:59.681372 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e976dda6e76227ec3ea6f3c6e5c18f96d69ea7657fbd0add1cd4491703c4216b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:59.684949 kubelet[3330]: E1013 00:00:59.681422 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e976dda6e76227ec3ea6f3c6e5c18f96d69ea7657fbd0add1cd4491703c4216b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:00:59.685468 kubelet[3330]: E1013 00:00:59.681516 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6q2t_calico-system(e00ccdfa-8874-41a5-8202-ed22b850ea32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6q2t_calico-system(e00ccdfa-8874-41a5-8202-ed22b850ea32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e976dda6e76227ec3ea6f3c6e5c18f96d69ea7657fbd0add1cd4491703c4216b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:00:59.754946 containerd[2018]: time="2025-10-13T00:00:59.754867301Z" level=error msg="Failed to destroy network for sandbox \"f9c8e446b0dd56106c40dfdaf121e3b93e7881ac466609634866a5b1c9612a35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.761338 containerd[2018]: time="2025-10-13T00:00:59.758045813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6q65z,Uid:422228b7-7530-44f9-8a2d-a1df9e389906,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c8e446b0dd56106c40dfdaf121e3b93e7881ac466609634866a5b1c9612a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.761572 kubelet[3330]: E1013 00:00:59.759935 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c8e446b0dd56106c40dfdaf121e3b93e7881ac466609634866a5b1c9612a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.761572 kubelet[3330]: E1013 00:00:59.760005 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c8e446b0dd56106c40dfdaf121e3b93e7881ac466609634866a5b1c9612a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:00:59.761572 kubelet[3330]: E1013 00:00:59.760046 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c8e446b0dd56106c40dfdaf121e3b93e7881ac466609634866a5b1c9612a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:00:59.765586 kubelet[3330]: E1013 00:00:59.760140 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6q65z_kube-system(422228b7-7530-44f9-8a2d-a1df9e389906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6q65z_kube-system(422228b7-7530-44f9-8a2d-a1df9e389906)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9c8e446b0dd56106c40dfdaf121e3b93e7881ac466609634866a5b1c9612a35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6q65z" podUID="422228b7-7530-44f9-8a2d-a1df9e389906" Oct 13 00:00:59.761961 systemd[1]: run-netns-cni\x2df3c783c1\x2dfba0\x2da2e4\x2dd596\x2dd4200e38efb5.mount: Deactivated successfully. 
Oct 13 00:00:59.794953 containerd[2018]: time="2025-10-13T00:00:59.794874989Z" level=error msg="Failed to destroy network for sandbox \"495cef69b86168bb7f8071d401aca5e7f349c98e94c659a4440259b58fa3f899\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.798325 containerd[2018]: time="2025-10-13T00:00:59.798245441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbf22,Uid:b801fade-64ce-4951-a21e-a553877deb86,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"495cef69b86168bb7f8071d401aca5e7f349c98e94c659a4440259b58fa3f899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.798786 kubelet[3330]: E1013 00:00:59.798655 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"495cef69b86168bb7f8071d401aca5e7f349c98e94c659a4440259b58fa3f899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:00:59.799070 kubelet[3330]: E1013 00:00:59.798785 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"495cef69b86168bb7f8071d401aca5e7f349c98e94c659a4440259b58fa3f899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gbf22" Oct 13 00:00:59.799070 kubelet[3330]: E1013 00:00:59.798844 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"495cef69b86168bb7f8071d401aca5e7f349c98e94c659a4440259b58fa3f899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gbf22" Oct 13 00:00:59.801686 kubelet[3330]: E1013 00:00:59.800801 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gbf22_kube-system(b801fade-64ce-4951-a21e-a553877deb86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gbf22_kube-system(b801fade-64ce-4951-a21e-a553877deb86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"495cef69b86168bb7f8071d401aca5e7f349c98e94c659a4440259b58fa3f899\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gbf22" podUID="b801fade-64ce-4951-a21e-a553877deb86" Oct 13 00:00:59.803947 systemd[1]: run-netns-cni\x2df941af71\x2d65a0\x2dc3d6\x2dc034\x2d584c6b7923d5.mount: Deactivated successfully. Oct 13 00:01:03.785906 systemd[1]: Started sshd@8-172.31.31.230:22-139.178.89.65:39670.service - OpenSSH per-connection server daemon (139.178.89.65:39670). 
Oct 13 00:01:03.991768 sshd[4508]: Accepted publickey for core from 139.178.89.65 port 39670 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:03.994073 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:04.002577 systemd-logind[1987]: New session 9 of user core. Oct 13 00:01:04.011245 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 00:01:04.254845 sshd[4511]: Connection closed by 139.178.89.65 port 39670 Oct 13 00:01:04.255624 sshd-session[4508]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:04.262305 systemd[1]: sshd@8-172.31.31.230:22-139.178.89.65:39670.service: Deactivated successfully. Oct 13 00:01:04.266570 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 00:01:04.270852 systemd-logind[1987]: Session 9 logged out. Waiting for processes to exit. Oct 13 00:01:04.273191 systemd-logind[1987]: Removed session 9. Oct 13 00:01:09.300264 systemd[1]: Started sshd@9-172.31.31.230:22-139.178.89.65:39686.service - OpenSSH per-connection server daemon (139.178.89.65:39686). Oct 13 00:01:09.441609 containerd[2018]: time="2025-10-13T00:01:09.441555157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5488c84c-4b69c,Uid:166905be-c1b7-43c4-bca9-2277f7d3da47,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:09.501759 sshd[4524]: Accepted publickey for core from 139.178.89.65 port 39686 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:09.505309 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:09.514558 systemd-logind[1987]: New session 10 of user core. Oct 13 00:01:09.519092 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 13 00:01:09.577143 containerd[2018]: time="2025-10-13T00:01:09.576498494Z" level=error msg="Failed to destroy network for sandbox \"257ba1471cd321116204532d3af1cb1f5a6659391b9040cf5df945097fd11932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:09.582236 containerd[2018]: time="2025-10-13T00:01:09.581522306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5488c84c-4b69c,Uid:166905be-c1b7-43c4-bca9-2277f7d3da47,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"257ba1471cd321116204532d3af1cb1f5a6659391b9040cf5df945097fd11932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:09.583354 kubelet[3330]: E1013 00:01:09.582787 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"257ba1471cd321116204532d3af1cb1f5a6659391b9040cf5df945097fd11932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:09.583354 kubelet[3330]: E1013 00:01:09.582865 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"257ba1471cd321116204532d3af1cb1f5a6659391b9040cf5df945097fd11932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:01:09.583354 kubelet[3330]: E1013 00:01:09.582898 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"257ba1471cd321116204532d3af1cb1f5a6659391b9040cf5df945097fd11932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5488c84c-4b69c" Oct 13 00:01:09.582941 systemd[1]: run-netns-cni\x2de71d6dae\x2d8235\x2d5adb\x2dab3c\x2d34a4c2a9dcd4.mount: Deactivated successfully. 
Oct 13 00:01:09.585115 kubelet[3330]: E1013 00:01:09.582976 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f5488c84c-4b69c_calico-system(166905be-c1b7-43c4-bca9-2277f7d3da47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f5488c84c-4b69c_calico-system(166905be-c1b7-43c4-bca9-2277f7d3da47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"257ba1471cd321116204532d3af1cb1f5a6659391b9040cf5df945097fd11932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f5488c84c-4b69c" podUID="166905be-c1b7-43c4-bca9-2277f7d3da47" Oct 13 00:01:09.769454 sshd[4547]: Connection closed by 139.178.89.65 port 39686 Oct 13 00:01:09.770452 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:09.778203 systemd-logind[1987]: Session 10 logged out. Waiting for processes to exit. Oct 13 00:01:09.778286 systemd[1]: sshd@9-172.31.31.230:22-139.178.89.65:39686.service: Deactivated successfully. Oct 13 00:01:09.783091 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 00:01:09.787011 systemd-logind[1987]: Removed session 10. Oct 13 00:01:11.452953 containerd[2018]: time="2025-10-13T00:01:11.452887599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbf22,Uid:b801fade-64ce-4951-a21e-a553877deb86,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:11.467174 containerd[2018]: time="2025-10-13T00:01:11.467103675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d6998867-blpss,Uid:df1be13a-c754-44d7-acb4-1da7bd922406,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:11.468505 containerd[2018]: time="2025-10-13T00:01:11.467951991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-k27mm,Uid:55154b5a-564e-4426-8465-982b6288d007,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:11.471988 containerd[2018]: time="2025-10-13T00:01:11.471912651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-gvbpr,Uid:355ad4db-b664-4420-b21a-f0cbc5ec56fc,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:11.717164 containerd[2018]: time="2025-10-13T00:01:11.716928784Z" level=error msg="Failed to destroy network for sandbox \"b752ddc52e345da35498c0dd76d484d6fb9c6533f6b72a30031ded39ec2f8e4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.720776 containerd[2018]: time="2025-10-13T00:01:11.720367816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d6998867-blpss,Uid:df1be13a-c754-44d7-acb4-1da7bd922406,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b752ddc52e345da35498c0dd76d484d6fb9c6533f6b72a30031ded39ec2f8e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.725936 kubelet[3330]: E1013 00:01:11.724100 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b752ddc52e345da35498c0dd76d484d6fb9c6533f6b72a30031ded39ec2f8e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.725936 kubelet[3330]: E1013 00:01:11.724339 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b752ddc52e345da35498c0dd76d484d6fb9c6533f6b72a30031ded39ec2f8e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d6998867-blpss" Oct 13 00:01:11.725936 kubelet[3330]: E1013 00:01:11.724656 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b752ddc52e345da35498c0dd76d484d6fb9c6533f6b72a30031ded39ec2f8e4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d6998867-blpss" Oct 13 00:01:11.724624 systemd[1]: run-netns-cni\x2d35b59fc2\x2d8978\x2d820c\x2df90a\x2dba6b72d054eb.mount: Deactivated successfully. Oct 13 00:01:11.729988 kubelet[3330]: E1013 00:01:11.724798 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64d6998867-blpss_calico-system(df1be13a-c754-44d7-acb4-1da7bd922406)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64d6998867-blpss_calico-system(df1be13a-c754-44d7-acb4-1da7bd922406)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b752ddc52e345da35498c0dd76d484d6fb9c6533f6b72a30031ded39ec2f8e4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d6998867-blpss" podUID="df1be13a-c754-44d7-acb4-1da7bd922406" Oct 13 00:01:11.736296 containerd[2018]: time="2025-10-13T00:01:11.736193212Z" level=error msg="Failed to destroy network for sandbox \"969ad43da81b508827ce87c957b59a29fb562a6da34bddb08c4f40bee4b8d849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.737798 containerd[2018]: time="2025-10-13T00:01:11.736231024Z" level=error msg="Failed to destroy network for sandbox \"b9d7f93b3039485b46e2157e7311882c21ee4707f2ecc178405a0501a6f972dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.740228 containerd[2018]: time="2025-10-13T00:01:11.740050769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-gvbpr,Uid:355ad4db-b664-4420-b21a-f0cbc5ec56fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"969ad43da81b508827ce87c957b59a29fb562a6da34bddb08c4f40bee4b8d849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.741881 systemd[1]: run-netns-cni\x2dea9c958d\x2da268\x2dcfd4\x2d109f\x2d256f05715d18.mount: Deactivated successfully. Oct 13 00:01:11.743361 kubelet[3330]: E1013 00:01:11.742043 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"969ad43da81b508827ce87c957b59a29fb562a6da34bddb08c4f40bee4b8d849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.743361 kubelet[3330]: E1013 00:01:11.742116 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"969ad43da81b508827ce87c957b59a29fb562a6da34bddb08c4f40bee4b8d849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" Oct 13 00:01:11.743361 kubelet[3330]: E1013 00:01:11.742172 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"969ad43da81b508827ce87c957b59a29fb562a6da34bddb08c4f40bee4b8d849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" Oct 13 00:01:11.745857 kubelet[3330]: E1013 00:01:11.742253 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66885c755-gvbpr_calico-apiserver(355ad4db-b664-4420-b21a-f0cbc5ec56fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66885c755-gvbpr_calico-apiserver(355ad4db-b664-4420-b21a-f0cbc5ec56fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"969ad43da81b508827ce87c957b59a29fb562a6da34bddb08c4f40bee4b8d849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" podUID="355ad4db-b664-4420-b21a-f0cbc5ec56fc" Oct 13 00:01:11.748176 containerd[2018]: time="2025-10-13T00:01:11.747838001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbf22,Uid:b801fade-64ce-4951-a21e-a553877deb86,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d7f93b3039485b46e2157e7311882c21ee4707f2ecc178405a0501a6f972dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.749658 kubelet[3330]: E1013 00:01:11.749596 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d7f93b3039485b46e2157e7311882c21ee4707f2ecc178405a0501a6f972dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.749933 kubelet[3330]: E1013 
00:01:11.749676 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d7f93b3039485b46e2157e7311882c21ee4707f2ecc178405a0501a6f972dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gbf22" Oct 13 00:01:11.749933 kubelet[3330]: E1013 00:01:11.749862 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d7f93b3039485b46e2157e7311882c21ee4707f2ecc178405a0501a6f972dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gbf22" Oct 13 00:01:11.750322 kubelet[3330]: E1013 00:01:11.750225 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gbf22_kube-system(b801fade-64ce-4951-a21e-a553877deb86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gbf22_kube-system(b801fade-64ce-4951-a21e-a553877deb86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9d7f93b3039485b46e2157e7311882c21ee4707f2ecc178405a0501a6f972dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gbf22" podUID="b801fade-64ce-4951-a21e-a553877deb86" Oct 13 00:01:11.753626 containerd[2018]: time="2025-10-13T00:01:11.753551993Z" level=error msg="Failed to destroy network for sandbox \"6b341de650bd8378c43872d0e83ac29b9225ccfbda94b4e4a4faaa906caeae91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.755470 containerd[2018]: time="2025-10-13T00:01:11.755255861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-k27mm,Uid:55154b5a-564e-4426-8465-982b6288d007,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b341de650bd8378c43872d0e83ac29b9225ccfbda94b4e4a4faaa906caeae91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.756216 kubelet[3330]: E1013 00:01:11.756139 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b341de650bd8378c43872d0e83ac29b9225ccfbda94b4e4a4faaa906caeae91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:11.756367 kubelet[3330]: E1013 00:01:11.756221 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b341de650bd8378c43872d0e83ac29b9225ccfbda94b4e4a4faaa906caeae91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" Oct 13 00:01:11.756367 kubelet[3330]: E1013 00:01:11.756256 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b341de650bd8378c43872d0e83ac29b9225ccfbda94b4e4a4faaa906caeae91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" Oct 13 00:01:11.756367 kubelet[3330]: E1013 00:01:11.756345 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66885c755-k27mm_calico-apiserver(55154b5a-564e-4426-8465-982b6288d007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66885c755-k27mm_calico-apiserver(55154b5a-564e-4426-8465-982b6288d007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b341de650bd8378c43872d0e83ac29b9225ccfbda94b4e4a4faaa906caeae91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" podUID="55154b5a-564e-4426-8465-982b6288d007" Oct 13 00:01:12.462283 systemd[1]: run-netns-cni\x2ddf8f4fc5\x2d2de4\x2d1b1d\x2df62b\x2d8edef906ce58.mount: Deactivated successfully. Oct 13 00:01:12.462548 systemd[1]: run-netns-cni\x2dacaa09e5\x2de5b6\x2d03e6\x2d2a0c\x2d5e3ff8d3307e.mount: Deactivated successfully. Oct 13 00:01:13.447930 containerd[2018]: time="2025-10-13T00:01:13.447826637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6q2t,Uid:e00ccdfa-8874-41a5-8202-ed22b850ea32,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:13.450275 containerd[2018]: time="2025-10-13T00:01:13.450215261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-pp9j2,Uid:04c57b42-e794-43a6-9970-4992816d2fff,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:13.719640 containerd[2018]: time="2025-10-13T00:01:13.719113098Z" level=error msg="Failed to destroy network for sandbox \"7f4af25a6162c962ca3841acecdd05c37ca5386897cd7b59f9724822354c9da3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:13.728753 containerd[2018]: time="2025-10-13T00:01:13.726326094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-pp9j2,Uid:04c57b42-e794-43a6-9970-4992816d2fff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4af25a6162c962ca3841acecdd05c37ca5386897cd7b59f9724822354c9da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:13.728958 kubelet[3330]: E1013 00:01:13.728369 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4af25a6162c962ca3841acecdd05c37ca5386897cd7b59f9724822354c9da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:13.728958 kubelet[3330]: E1013 00:01:13.728447 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4af25a6162c962ca3841acecdd05c37ca5386897cd7b59f9724822354c9da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:01:13.728958 kubelet[3330]: E1013 00:01:13.728480 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4af25a6162c962ca3841acecdd05c37ca5386897cd7b59f9724822354c9da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-pp9j2" Oct 13 00:01:13.726938 systemd[1]: run-netns-cni\x2df3ffff4d\x2d01f2\x2d8564\x2d6022\x2db9b75814afdc.mount: Deactivated successfully. Oct 13 00:01:13.731509 kubelet[3330]: E1013 00:01:13.728554 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-pp9j2_calico-system(04c57b42-e794-43a6-9970-4992816d2fff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-pp9j2_calico-system(04c57b42-e794-43a6-9970-4992816d2fff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f4af25a6162c962ca3841acecdd05c37ca5386897cd7b59f9724822354c9da3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-pp9j2" podUID="04c57b42-e794-43a6-9970-4992816d2fff" Oct 13 00:01:13.780317 containerd[2018]: time="2025-10-13T00:01:13.779906263Z" level=error msg="Failed to destroy network for sandbox \"1ae9a76fc4dc6232fa4067e17eef32963f9cfbb93ea3bbe7f43a8f6738251656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:13.788546 containerd[2018]: time="2025-10-13T00:01:13.787833463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6q2t,Uid:e00ccdfa-8874-41a5-8202-ed22b850ea32,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae9a76fc4dc6232fa4067e17eef32963f9cfbb93ea3bbe7f43a8f6738251656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:13.788154 systemd[1]: run-netns-cni\x2d598e207d\x2d8b1b\x2d30c9\x2d3e61\x2d34a99ce4dd90.mount: Deactivated successfully. 
Oct 13 00:01:13.791990 kubelet[3330]: E1013 00:01:13.791256 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae9a76fc4dc6232fa4067e17eef32963f9cfbb93ea3bbe7f43a8f6738251656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:13.792298 kubelet[3330]: E1013 00:01:13.792099 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae9a76fc4dc6232fa4067e17eef32963f9cfbb93ea3bbe7f43a8f6738251656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:01:13.792298 kubelet[3330]: E1013 00:01:13.792281 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae9a76fc4dc6232fa4067e17eef32963f9cfbb93ea3bbe7f43a8f6738251656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6q2t" Oct 13 00:01:13.793181 kubelet[3330]: E1013 00:01:13.792643 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6q2t_calico-system(e00ccdfa-8874-41a5-8202-ed22b850ea32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6q2t_calico-system(e00ccdfa-8874-41a5-8202-ed22b850ea32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ae9a76fc4dc6232fa4067e17eef32963f9cfbb93ea3bbe7f43a8f6738251656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6q2t" podUID="e00ccdfa-8874-41a5-8202-ed22b850ea32" Oct 13 00:01:14.447619 containerd[2018]: time="2025-10-13T00:01:14.446581938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6q65z,Uid:422228b7-7530-44f9-8a2d-a1df9e389906,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:14.683311 containerd[2018]: time="2025-10-13T00:01:14.683193559Z" level=error msg="Failed to destroy network for sandbox \"61cf6c8abf68f1bf99a3d0d3faa89b990761962116e6e8f58665c2209a302bd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:14.688515 systemd[1]: run-netns-cni\x2d12284bff\x2d230d\x2da614\x2dd680\x2da5867f7eee6b.mount: Deactivated successfully. 
Oct 13 00:01:14.691743 containerd[2018]: time="2025-10-13T00:01:14.691453975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6q65z,Uid:422228b7-7530-44f9-8a2d-a1df9e389906,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cf6c8abf68f1bf99a3d0d3faa89b990761962116e6e8f58665c2209a302bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:14.693494 kubelet[3330]: E1013 00:01:14.692340 3330 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cf6c8abf68f1bf99a3d0d3faa89b990761962116e6e8f58665c2209a302bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:01:14.693494 kubelet[3330]: E1013 00:01:14.693034 3330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cf6c8abf68f1bf99a3d0d3faa89b990761962116e6e8f58665c2209a302bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:01:14.693494 kubelet[3330]: E1013 00:01:14.693129 3330 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61cf6c8abf68f1bf99a3d0d3faa89b990761962116e6e8f58665c2209a302bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6q65z" Oct 13 00:01:14.695376 kubelet[3330]: E1013 00:01:14.693463 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6q65z_kube-system(422228b7-7530-44f9-8a2d-a1df9e389906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6q65z_kube-system(422228b7-7530-44f9-8a2d-a1df9e389906)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61cf6c8abf68f1bf99a3d0d3faa89b990761962116e6e8f58665c2209a302bd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6q65z" podUID="422228b7-7530-44f9-8a2d-a1df9e389906" Oct 13 00:01:14.811949 systemd[1]: Started sshd@10-172.31.31.230:22-139.178.89.65:59348.service - OpenSSH per-connection server daemon (139.178.89.65:59348). Oct 13 00:01:15.031337 sshd[4757]: Accepted publickey for core from 139.178.89.65 port 59348 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:15.035511 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:15.047126 systemd-logind[1987]: New session 11 of user core. Oct 13 00:01:15.053208 systemd[1]: Started session-11.scope - Session 11 of User core. 
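The sandbox failures above for goldmane-854f97d977-pp9j2, csi-node-driver-h6q2t and coredns-66bc5c9577-6q65z all reduce to the same condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and until that file exists every pod network setup on this node fails with the identical error. As a rough illustration only (the path is quoted verbatim from the errors above; the check itself is a hypothetical helper, not part of Calico), a host-side look might be:

    import os, sys

    NODENAME = "/var/lib/calico/nodename"  # path quoted verbatim in the errors above

    if os.path.isfile(NODENAME):
        with open(NODENAME) as f:
            print("calico nodename:", f.read().strip())
    else:
        sys.exit(NODENAME + " is missing; calico/node has not written it yet")

The failures clear up later in this log, once the calico-node image finishes pulling and the container starts.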
Oct 13 00:01:15.383423 sshd[4760]: Connection closed by 139.178.89.65 port 59348 Oct 13 00:01:15.382409 sshd-session[4757]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:15.394377 systemd[1]: sshd@10-172.31.31.230:22-139.178.89.65:59348.service: Deactivated successfully. Oct 13 00:01:15.403140 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 00:01:15.408179 systemd-logind[1987]: Session 11 logged out. Waiting for processes to exit. Oct 13 00:01:15.443292 systemd[1]: Started sshd@11-172.31.31.230:22-139.178.89.65:59354.service - OpenSSH per-connection server daemon (139.178.89.65:59354). Oct 13 00:01:15.450585 systemd-logind[1987]: Removed session 11. Oct 13 00:01:15.697289 sshd[4773]: Accepted publickey for core from 139.178.89.65 port 59354 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:15.702385 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:15.718946 systemd-logind[1987]: New session 12 of user core. Oct 13 00:01:15.728055 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 00:01:16.227266 sshd[4776]: Connection closed by 139.178.89.65 port 59354 Oct 13 00:01:16.229359 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:16.244385 systemd[1]: sshd@11-172.31.31.230:22-139.178.89.65:59354.service: Deactivated successfully. Oct 13 00:01:16.255256 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 00:01:16.288770 systemd-logind[1987]: Session 12 logged out. Waiting for processes to exit. Oct 13 00:01:16.292875 systemd[1]: Started sshd@12-172.31.31.230:22-139.178.89.65:59358.service - OpenSSH per-connection server daemon (139.178.89.65:59358). Oct 13 00:01:16.302858 systemd-logind[1987]: Removed session 12. Oct 13 00:01:16.505960 sshd[4786]: Accepted publickey for core from 139.178.89.65 port 59358 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:16.514927 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:16.542344 systemd-logind[1987]: New session 13 of user core. Oct 13 00:01:16.551020 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 00:01:16.921531 sshd[4789]: Connection closed by 139.178.89.65 port 59358 Oct 13 00:01:16.924989 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:16.942097 systemd[1]: sshd@12-172.31.31.230:22-139.178.89.65:59358.service: Deactivated successfully. Oct 13 00:01:16.953539 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 00:01:16.959863 systemd-logind[1987]: Session 13 logged out. Waiting for processes to exit. Oct 13 00:01:16.965799 systemd-logind[1987]: Removed session 13. Oct 13 00:01:18.584169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084599963.mount: Deactivated successfully. 
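The mount unit names in these entries (run-netns-cni\x2df3ffff4d..., var-lib-containerd-tmpmounts-containerd\x2dmount4084599963.mount) are systemd's escaped form of filesystem paths: slashes become dashes and literal dashes are hex-escaped. A simplified sketch of that escaping, as a hypothetical helper covering only the common cases (the full rules live in systemd's unit-name handling, cf. systemd-escape --path):

    def systemd_escape_path(path):
        """Simplified sketch of systemd path escaping (cf. systemd-escape --path)."""
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                  # path separators become '-'
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)                   # characters valid in unit names pass through
            else:
                out.append("\\x%02x" % ord(ch))  # everything else, including '-', is hex-escaped
        return "".join(out)

    print(systemd_escape_path("/run/netns/cni-f3ffff4d-01f2-8564-6022-b9b75814afdc"))
    # run-netns-cni\x2df3ffff4d\x2d01f2\x2d8564\x2d6022\x2db9b75814afdc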
Oct 13 00:01:18.640582 containerd[2018]: time="2025-10-13T00:01:18.640488647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.642558 containerd[2018]: time="2025-10-13T00:01:18.642440063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 00:01:18.645096 containerd[2018]: time="2025-10-13T00:01:18.645019079Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.649148 containerd[2018]: time="2025-10-13T00:01:18.649098059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:18.650463 containerd[2018]: time="2025-10-13T00:01:18.650192855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 19.856860107s" Oct 13 00:01:18.650463 containerd[2018]: time="2025-10-13T00:01:18.650255015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 00:01:18.703414 containerd[2018]: time="2025-10-13T00:01:18.703356251Z" level=info msg="CreateContainer within sandbox \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 00:01:18.722378 containerd[2018]: time="2025-10-13T00:01:18.722312699Z" level=info msg="Container 738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:18.732476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160929939.mount: Deactivated successfully. Oct 13 00:01:18.749294 containerd[2018]: time="2025-10-13T00:01:18.749216327Z" level=info msg="CreateContainer within sandbox \"0cb458733e76b0cdf255bdaad9feb21c0e0644f41f16322ff85a34a391ea6b99\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\"" Oct 13 00:01:18.751115 containerd[2018]: time="2025-10-13T00:01:18.750110015Z" level=info msg="StartContainer for \"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\"" Oct 13 00:01:18.755161 containerd[2018]: time="2025-10-13T00:01:18.755043263Z" level=info msg="connecting to shim 738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66" address="unix:///run/containerd/s/86e29293f329c8eca4a675d32d180b0661399bec45f293b310a8e93cea9fe49d" protocol=ttrpc version=3 Oct 13 00:01:18.824603 systemd[1]: Started cri-containerd-738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66.scope - libcontainer container 738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66. Oct 13 00:01:18.979393 containerd[2018]: time="2025-10-13T00:01:18.979243212Z" level=info msg="StartContainer for \"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\" returns successfully" Oct 13 00:01:19.292251 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 13 00:01:19.292395 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 00:01:19.663734 kubelet[3330]: I1013 00:01:19.663524 3330 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-backend-key-pair\") pod \"166905be-c1b7-43c4-bca9-2277f7d3da47\" (UID: \"166905be-c1b7-43c4-bca9-2277f7d3da47\") " Oct 13 00:01:19.666512 kubelet[3330]: I1013 00:01:19.665843 3330 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-ca-bundle\") pod \"166905be-c1b7-43c4-bca9-2277f7d3da47\" (UID: \"166905be-c1b7-43c4-bca9-2277f7d3da47\") " Oct 13 00:01:19.666512 kubelet[3330]: I1013 00:01:19.665918 3330 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5rvl\" (UniqueName: \"kubernetes.io/projected/166905be-c1b7-43c4-bca9-2277f7d3da47-kube-api-access-f5rvl\") pod \"166905be-c1b7-43c4-bca9-2277f7d3da47\" (UID: \"166905be-c1b7-43c4-bca9-2277f7d3da47\") " Oct 13 00:01:19.672263 kubelet[3330]: I1013 00:01:19.672189 3330 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "166905be-c1b7-43c4-bca9-2277f7d3da47" (UID: "166905be-c1b7-43c4-bca9-2277f7d3da47"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:01:19.674771 systemd[1]: var-lib-kubelet-pods-166905be\x2dc1b7\x2d43c4\x2dbca9\x2d2277f7d3da47-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 00:01:19.679571 kubelet[3330]: I1013 00:01:19.679006 3330 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "166905be-c1b7-43c4-bca9-2277f7d3da47" (UID: "166905be-c1b7-43c4-bca9-2277f7d3da47"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:01:19.688779 kubelet[3330]: I1013 00:01:19.688658 3330 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/166905be-c1b7-43c4-bca9-2277f7d3da47-kube-api-access-f5rvl" (OuterVolumeSpecName: "kube-api-access-f5rvl") pod "166905be-c1b7-43c4-bca9-2277f7d3da47" (UID: "166905be-c1b7-43c4-bca9-2277f7d3da47"). InnerVolumeSpecName "kube-api-access-f5rvl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:01:19.689454 systemd[1]: var-lib-kubelet-pods-166905be\x2dc1b7\x2d43c4\x2dbca9\x2d2277f7d3da47-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5rvl.mount: Deactivated successfully. 
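For scale, the calico/node pull reported a few entries above moved roughly 151 MB in the 19.86 s that containerd attributes to the pull itself (the kubelet startup tracker in the next entries counts a much longer window that includes waiting), so as a back-of-the-envelope figure the pull proper ran at about 7 MiB/s:

    size_bytes = 151_100_319     # repo digest size reported for calico/node v3.30.3 above
    pull_secs  = 19.856860107    # "Pulled image ... in 19.856860107s"

    print(round(size_bytes / pull_secs / 2**20, 1), "MiB/s")   # ~7.3 MiB/s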
Oct 13 00:01:19.767350 kubelet[3330]: I1013 00:01:19.767240 3330 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5rvl\" (UniqueName: \"kubernetes.io/projected/166905be-c1b7-43c4-bca9-2277f7d3da47-kube-api-access-f5rvl\") on node \"ip-172-31-31-230\" DevicePath \"\"" Oct 13 00:01:19.767350 kubelet[3330]: I1013 00:01:19.767288 3330 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-backend-key-pair\") on node \"ip-172-31-31-230\" DevicePath \"\"" Oct 13 00:01:19.767350 kubelet[3330]: I1013 00:01:19.767313 3330 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/166905be-c1b7-43c4-bca9-2277f7d3da47-whisker-ca-bundle\") on node \"ip-172-31-31-230\" DevicePath \"\"" Oct 13 00:01:19.950571 systemd[1]: Removed slice kubepods-besteffort-pod166905be_c1b7_43c4_bca9_2277f7d3da47.slice - libcontainer container kubepods-besteffort-pod166905be_c1b7_43c4_bca9_2277f7d3da47.slice. Oct 13 00:01:20.000655 kubelet[3330]: I1013 00:01:20.000432 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r8z86" podStartSLOduration=2.879827791 podStartE2EDuration="59.000255478s" podCreationTimestamp="2025-10-13 00:00:21 +0000 UTC" firstStartedPulling="2025-10-13 00:00:22.531422488 +0000 UTC m=+33.482713283" lastFinishedPulling="2025-10-13 00:01:18.651850187 +0000 UTC m=+89.603140970" observedRunningTime="2025-10-13 00:01:19.998320958 +0000 UTC m=+90.949611777" watchObservedRunningTime="2025-10-13 00:01:20.000255478 +0000 UTC m=+90.951546297" Oct 13 00:01:20.259661 containerd[2018]: time="2025-10-13T00:01:20.259601711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\" id:\"273013578c3b95866beca784be65a4165e3c98a0a14be9721390ecc80a71ab7c\" pid:4864 exit_status:1 exited_at:{seconds:1760313680 nanos:259165451}" Oct 13 00:01:21.062065 containerd[2018]: time="2025-10-13T00:01:21.062008355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\" id:\"a13e858d5863d34ebd080831cbabe3f0bd997052ee462555d0728e901303db14\" pid:4898 exit_status:1 exited_at:{seconds:1760313681 nanos:61538339}" Oct 13 00:01:21.446511 kubelet[3330]: I1013 00:01:21.446366 3330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="166905be-c1b7-43c4-bca9-2277f7d3da47" path="/var/lib/kubelet/pods/166905be-c1b7-43c4-bca9-2277f7d3da47/volumes" Oct 13 00:01:21.963694 systemd[1]: Started sshd@13-172.31.31.230:22-139.178.89.65:59368.service - OpenSSH per-connection server daemon (139.178.89.65:59368). Oct 13 00:01:22.165258 sshd[5034]: Accepted publickey for core from 139.178.89.65 port 59368 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:22.169411 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:22.184077 systemd-logind[1987]: New session 14 of user core. Oct 13 00:01:22.193070 systemd[1]: Started session-14.scope - Session 14 of User core. 
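The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration (about 59.000 s from pod creation to observed running) minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 56.120 s) gives the reported podStartSLOduration of roughly 2.88 s. A quick arithmetic check, with the log timestamps truncated to microseconds so the final digits differ slightly:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-10-13 00:00:22.531422", fmt)  # firstStartedPulling
    last_pull  = datetime.strptime("2025-10-13 00:01:18.651850", fmt)  # lastFinishedPulling
    e2e        = 59.000255478                                          # podStartE2EDuration

    pulling = (last_pull - first_pull).total_seconds()  # ~56.120428 s spent pulling images
    print(round(e2e - pulling, 6))                      # ~2.879827, matching podStartSLOduration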
Oct 13 00:01:22.444763 containerd[2018]: time="2025-10-13T00:01:22.444333626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d6998867-blpss,Uid:df1be13a-c754-44d7-acb4-1da7bd922406,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:22.692460 sshd[5045]: Connection closed by 139.178.89.65 port 59368 Oct 13 00:01:22.693348 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:22.707129 systemd[1]: sshd@13-172.31.31.230:22-139.178.89.65:59368.service: Deactivated successfully. Oct 13 00:01:22.718017 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 00:01:22.724402 systemd-logind[1987]: Session 14 logged out. Waiting for processes to exit. Oct 13 00:01:22.732138 systemd-logind[1987]: Removed session 14. Oct 13 00:01:22.776991 (udev-worker)[4843]: Network interface NamePolicy= disabled on kernel command line. Oct 13 00:01:22.834700 systemd-networkd[1898]: vxlan.calico: Link UP Oct 13 00:01:22.837917 systemd-networkd[1898]: vxlan.calico: Gained carrier Oct 13 00:01:22.929568 (udev-worker)[4842]: Network interface NamePolicy= disabled on kernel command line. Oct 13 00:01:23.019821 systemd-networkd[1898]: cali00c6a11f6c5: Link UP Oct 13 00:01:23.022261 systemd-networkd[1898]: cali00c6a11f6c5: Gained carrier Oct 13 00:01:23.052883 containerd[2018]: 2025-10-13 00:01:22.679 [INFO][5063] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0 calico-kube-controllers-64d6998867- calico-system df1be13a-c754-44d7-acb4-1da7bd922406 941 0 2025-10-13 00:00:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64d6998867 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-230 calico-kube-controllers-64d6998867-blpss eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali00c6a11f6c5 [] [] }} ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-" Oct 13 00:01:23.052883 containerd[2018]: 2025-10-13 00:01:22.679 [INFO][5063] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.052883 containerd[2018]: 2025-10-13 00:01:22.833 [INFO][5073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" HandleID="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Workload="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.835 [INFO][5073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" HandleID="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Workload="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d970), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-230", "pod":"calico-kube-controllers-64d6998867-blpss", "timestamp":"2025-10-13 00:01:22.8331427 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.835 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.835 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.838 [INFO][5073] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.870 [INFO][5073] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" host="ip-172-31-31-230" Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.882 [INFO][5073] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.899 [INFO][5073] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.913 [INFO][5073] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:23.053196 containerd[2018]: 2025-10-13 00:01:22.927 [INFO][5073] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.927 [INFO][5073] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" host="ip-172-31-31-230" Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.943 [INFO][5073] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2 Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.971 [INFO][5073] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" host="ip-172-31-31-230" Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.990 [INFO][5073] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.193/26] block=192.168.5.192/26 handle="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" host="ip-172-31-31-230" Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.990 [INFO][5073] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.193/26] handle="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" host="ip-172-31-31-230" Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.990 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:23.053670 containerd[2018]: 2025-10-13 00:01:22.991 [INFO][5073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.193/26] IPv6=[] ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" HandleID="k8s-pod-network.215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Workload="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.056514 containerd[2018]: 2025-10-13 00:01:23.005 [INFO][5063] cni-plugin/k8s.go 418: Populated endpoint ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0", GenerateName:"calico-kube-controllers-64d6998867-", Namespace:"calico-system", SelfLink:"", UID:"df1be13a-c754-44d7-acb4-1da7bd922406", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d6998867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"calico-kube-controllers-64d6998867-blpss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00c6a11f6c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:23.056655 containerd[2018]: 2025-10-13 00:01:23.006 [INFO][5063] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.193/32] ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.056655 containerd[2018]: 2025-10-13 00:01:23.006 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00c6a11f6c5 ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.056655 containerd[2018]: 2025-10-13 00:01:23.022 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.057340 containerd[2018]: 
2025-10-13 00:01:23.023 [INFO][5063] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0", GenerateName:"calico-kube-controllers-64d6998867-", Namespace:"calico-system", SelfLink:"", UID:"df1be13a-c754-44d7-acb4-1da7bd922406", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d6998867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2", Pod:"calico-kube-controllers-64d6998867-blpss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00c6a11f6c5", MAC:"22:6c:07:3e:35:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:23.057498 containerd[2018]: 2025-10-13 00:01:23.044 [INFO][5063] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" Namespace="calico-system" Pod="calico-kube-controllers-64d6998867-blpss" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--kube--controllers--64d6998867--blpss-eth0" Oct 13 00:01:23.173756 containerd[2018]: time="2025-10-13T00:01:23.172489921Z" level=info msg="connecting to shim 215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2" address="unix:///run/containerd/s/9d87840f7dc00ca3f27f5bf007313e12ab8c7160c4291f042a5d995bd98370eb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:23.235636 systemd[1]: Started cri-containerd-215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2.scope - libcontainer container 215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2. 
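The IPAM trace above claims the affinity block 192.168.5.192/26 for this node and hands calico-kube-controllers the first address in it. The block arithmetic is easy to confirm with the Python standard library; only the CIDR comes from the log, and hosts() skipping the network and broadcast addresses happens to line up with the addresses Calico hands out in this section (.193 here, .194 through .196 for the pods that follow):

    import ipaddress

    block = ipaddress.ip_network("192.168.5.192/26")    # affinity block from the IPAM log above
    print(block.num_addresses)                          # 64 addresses in a /26
    print([str(h) for h in list(block.hosts())[:4]])    # 192.168.5.193 .. 192.168.5.196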
Oct 13 00:01:23.447133 containerd[2018]: time="2025-10-13T00:01:23.447068043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbf22,Uid:b801fade-64ce-4951-a21e-a553877deb86,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:23.523463 containerd[2018]: time="2025-10-13T00:01:23.523103415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d6998867-blpss,Uid:df1be13a-c754-44d7-acb4-1da7bd922406,Namespace:calico-system,Attempt:0,} returns sandbox id \"215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2\"" Oct 13 00:01:23.533059 containerd[2018]: time="2025-10-13T00:01:23.532978299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 00:01:23.794502 systemd-networkd[1898]: cali07485853508: Link UP Oct 13 00:01:23.798146 systemd-networkd[1898]: cali07485853508: Gained carrier Oct 13 00:01:23.835032 containerd[2018]: 2025-10-13 00:01:23.601 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0 coredns-66bc5c9577- kube-system b801fade-64ce-4951-a21e-a553877deb86 935 0 2025-10-12 23:59:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-230 coredns-66bc5c9577-gbf22 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali07485853508 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-" Oct 13 00:01:23.835032 containerd[2018]: 2025-10-13 00:01:23.604 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 13 00:01:23.835032 containerd[2018]: 2025-10-13 00:01:23.695 [INFO][5170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" HandleID="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Workload="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.695 [INFO][5170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" HandleID="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Workload="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031edc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-230", "pod":"coredns-66bc5c9577-gbf22", "timestamp":"2025-10-13 00:01:23.695149972 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.695 [INFO][5170] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.695 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.696 [INFO][5170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.715 [INFO][5170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" host="ip-172-31-31-230" Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.730 [INFO][5170] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.744 [INFO][5170] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.749 [INFO][5170] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:23.835396 containerd[2018]: 2025-10-13 00:01:23.755 [INFO][5170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.755 [INFO][5170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" host="ip-172-31-31-230" Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.759 [INFO][5170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.768 [INFO][5170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" host="ip-172-31-31-230" Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.781 [INFO][5170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.194/26] block=192.168.5.192/26 handle="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" host="ip-172-31-31-230" Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.781 [INFO][5170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.194/26] handle="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" host="ip-172-31-31-230" Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.781 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:23.841104 containerd[2018]: 2025-10-13 00:01:23.781 [INFO][5170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.194/26] IPv6=[] ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" HandleID="k8s-pod-network.475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Workload="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 13 00:01:23.841580 containerd[2018]: 2025-10-13 00:01:23.786 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b801fade-64ce-4951-a21e-a553877deb86", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.October, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"coredns-66bc5c9577-gbf22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07485853508", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:23.841580 containerd[2018]: 2025-10-13 00:01:23.786 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.194/32] ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 13 00:01:23.841580 containerd[2018]: 2025-10-13 00:01:23.786 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07485853508 ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 
13 00:01:23.841580 containerd[2018]: 2025-10-13 00:01:23.796 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 13 00:01:23.841580 containerd[2018]: 2025-10-13 00:01:23.797 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b801fade-64ce-4951-a21e-a553877deb86", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.October, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc", Pod:"coredns-66bc5c9577-gbf22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07485853508", MAC:"ae:46:2c:68:44:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:23.841580 containerd[2018]: 2025-10-13 00:01:23.828 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" Namespace="kube-system" Pod="coredns-66bc5c9577-gbf22" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--gbf22-eth0" Oct 13 00:01:23.961237 containerd[2018]: time="2025-10-13T00:01:23.961149713Z" level=info msg="connecting to shim 475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc" address="unix:///run/containerd/s/5b1e3a13b986a578c931964069a76a796185e4e1704b26140ec97803f81e5489" namespace=k8s.io protocol=ttrpc version=3 
Oct 13 00:01:24.043115 systemd[1]: Started cri-containerd-475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc.scope - libcontainer container 475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc. Oct 13 00:01:24.178275 containerd[2018]: time="2025-10-13T00:01:24.178201130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbf22,Uid:b801fade-64ce-4951-a21e-a553877deb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc\"" Oct 13 00:01:24.204286 containerd[2018]: time="2025-10-13T00:01:24.204209546Z" level=info msg="CreateContainer within sandbox \"475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:01:24.260352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257094339.mount: Deactivated successfully. Oct 13 00:01:24.265155 containerd[2018]: time="2025-10-13T00:01:24.265065987Z" level=info msg="Container 72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:24.281175 containerd[2018]: time="2025-10-13T00:01:24.280895271Z" level=info msg="CreateContainer within sandbox \"475d7f45350c6e3400b1b3bea91b1272afd52e7878abe885ed3d3c807705e0cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78\"" Oct 13 00:01:24.283448 containerd[2018]: time="2025-10-13T00:01:24.283388355Z" level=info msg="StartContainer for \"72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78\"" Oct 13 00:01:24.286706 containerd[2018]: time="2025-10-13T00:01:24.286600731Z" level=info msg="connecting to shim 72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78" address="unix:///run/containerd/s/5b1e3a13b986a578c931964069a76a796185e4e1704b26140ec97803f81e5489" protocol=ttrpc version=3 Oct 13 00:01:24.334301 systemd[1]: Started cri-containerd-72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78.scope - libcontainer container 72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78. 
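The Ports list in the coredns WorkloadEndpoint dump above is printed with Go hex literals; converted to decimal they are the usual CoreDNS ports (53 for DNS over UDP and TCP, 9153 for metrics, 8080 and 8181 for the liveness and readiness probes):

    # Values copied from the WorkloadEndpoint dump; Python prints them back in decimal.
    for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1),
                       ("liveness-probe", 0x1f90), ("readiness-probe", 0x1ff5)]:
        print(name, port)
    # dns 53, dns-tcp 53, metrics 9153, liveness-probe 8080, readiness-probe 8181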
Oct 13 00:01:24.425433 containerd[2018]: time="2025-10-13T00:01:24.425356312Z" level=info msg="StartContainer for \"72ed208693d9ec66c10b53870139bb64a6c4fa6a9aa492caa6d2b4d8297dba78\" returns successfully" Oct 13 00:01:24.444116 containerd[2018]: time="2025-10-13T00:01:24.443969332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-pp9j2,Uid:04c57b42-e794-43a6-9970-4992816d2fff,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:24.449803 containerd[2018]: time="2025-10-13T00:01:24.449736544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-k27mm,Uid:55154b5a-564e-4426-8465-982b6288d007,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:24.453489 systemd-networkd[1898]: vxlan.calico: Gained IPv6LL Oct 13 00:01:24.772480 systemd-networkd[1898]: cali00c6a11f6c5: Gained IPv6LL Oct 13 00:01:25.315854 systemd-networkd[1898]: cali837cae1fa57: Link UP Oct 13 00:01:25.317159 systemd-networkd[1898]: cali837cae1fa57: Gained carrier Oct 13 00:01:25.374632 kubelet[3330]: I1013 00:01:25.374514 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gbf22" podStartSLOduration=91.374490112 podStartE2EDuration="1m31.374490112s" podCreationTimestamp="2025-10-12 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:01:25.137576487 +0000 UTC m=+96.088867306" watchObservedRunningTime="2025-10-13 00:01:25.374490112 +0000 UTC m=+96.325780931" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.686 [INFO][5289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0 calico-apiserver-66885c755- calico-apiserver 55154b5a-564e-4426-8465-982b6288d007 960 0 2025-10-13 00:00:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66885c755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-230 calico-apiserver-66885c755-k27mm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali837cae1fa57 [] [] }} ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.686 [INFO][5289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.819 [INFO][5316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" HandleID="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Workload="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.819 [INFO][5316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" 
HandleID="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Workload="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-230", "pod":"calico-apiserver-66885c755-k27mm", "timestamp":"2025-10-13 00:01:24.819050045 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.819 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.819 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.819 [INFO][5316] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:24.894 [INFO][5316] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.104 [INFO][5316] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.232 [INFO][5316] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.252 [INFO][5316] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.272 [INFO][5316] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.272 [INFO][5316] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.276 [INFO][5316] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.290 [INFO][5316] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.305 [INFO][5316] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.195/26] block=192.168.5.192/26 handle="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.305 [INFO][5316] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.195/26] handle="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" host="ip-172-31-31-230" Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.305 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:25.389468 containerd[2018]: 2025-10-13 00:01:25.305 [INFO][5316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.195/26] IPv6=[] ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" HandleID="k8s-pod-network.b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Workload="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.392582 containerd[2018]: 2025-10-13 00:01:25.309 [INFO][5289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0", GenerateName:"calico-apiserver-66885c755-", Namespace:"calico-apiserver", SelfLink:"", UID:"55154b5a-564e-4426-8465-982b6288d007", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66885c755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"calico-apiserver-66885c755-k27mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali837cae1fa57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:25.392582 containerd[2018]: 2025-10-13 00:01:25.310 [INFO][5289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.195/32] ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.392582 containerd[2018]: 2025-10-13 00:01:25.310 [INFO][5289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali837cae1fa57 ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.392582 containerd[2018]: 2025-10-13 00:01:25.318 [INFO][5289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.392582 containerd[2018]: 2025-10-13 00:01:25.321 [INFO][5289] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0", GenerateName:"calico-apiserver-66885c755-", Namespace:"calico-apiserver", SelfLink:"", UID:"55154b5a-564e-4426-8465-982b6288d007", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66885c755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a", Pod:"calico-apiserver-66885c755-k27mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali837cae1fa57", MAC:"ce:01:5a:3b:bc:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:25.392582 containerd[2018]: 2025-10-13 00:01:25.384 [INFO][5289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-k27mm" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--k27mm-eth0" Oct 13 00:01:25.535753 containerd[2018]: time="2025-10-13T00:01:25.533952893Z" level=info msg="connecting to shim b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a" address="unix:///run/containerd/s/51b036eee77f9ed5285f25b9c08c7912478f581da10b75953e7151c591f3d7bb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:25.590468 systemd-networkd[1898]: cali723fa1ea9f1: Link UP Oct 13 00:01:25.593597 systemd-networkd[1898]: cali723fa1ea9f1: Gained carrier Oct 13 00:01:25.672527 systemd-networkd[1898]: cali07485853508: Gained IPv6LL Oct 13 00:01:25.673434 systemd[1]: Started cri-containerd-b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a.scope - libcontainer container b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a. 
Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:24.740 [INFO][5285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0 goldmane-854f97d977- calico-system 04c57b42-e794-43a6-9970-4992816d2fff 949 0 2025-10-13 00:00:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-230 goldmane-854f97d977-pp9j2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali723fa1ea9f1 [] [] }} ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:24.742 [INFO][5285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:24.925 [INFO][5326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" HandleID="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Workload="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:24.925 [INFO][5326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" HandleID="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Workload="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-230", "pod":"goldmane-854f97d977-pp9j2", "timestamp":"2025-10-13 00:01:24.92542293 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:24.925 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.305 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.305 [INFO][5326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.392 [INFO][5326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.408 [INFO][5326] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.418 [INFO][5326] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.421 [INFO][5326] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.426 [INFO][5326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.428 [INFO][5326] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.432 [INFO][5326] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.490 [INFO][5326] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.553 [INFO][5326] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.196/26] block=192.168.5.192/26 handle="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.553 [INFO][5326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.196/26] handle="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" host="ip-172-31-31-230" Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.553 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:25.702239 containerd[2018]: 2025-10-13 00:01:25.553 [INFO][5326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.196/26] IPv6=[] ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" HandleID="k8s-pod-network.af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Workload="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.706494 containerd[2018]: 2025-10-13 00:01:25.574 [INFO][5285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"04c57b42-e794-43a6-9970-4992816d2fff", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"goldmane-854f97d977-pp9j2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali723fa1ea9f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:25.706494 containerd[2018]: 2025-10-13 00:01:25.574 [INFO][5285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.196/32] ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.706494 containerd[2018]: 2025-10-13 00:01:25.574 [INFO][5285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali723fa1ea9f1 ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.706494 containerd[2018]: 2025-10-13 00:01:25.607 [INFO][5285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.706494 containerd[2018]: 2025-10-13 00:01:25.611 [INFO][5285] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" 
WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"04c57b42-e794-43a6-9970-4992816d2fff", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc", Pod:"goldmane-854f97d977-pp9j2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali723fa1ea9f1", MAC:"12:ac:82:16:9b:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:25.706494 containerd[2018]: 2025-10-13 00:01:25.688 [INFO][5285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" Namespace="calico-system" Pod="goldmane-854f97d977-pp9j2" WorkloadEndpoint="ip--172--31--31--230-k8s-goldmane--854f97d977--pp9j2-eth0" Oct 13 00:01:25.803753 containerd[2018]: time="2025-10-13T00:01:25.803660358Z" level=info msg="connecting to shim af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc" address="unix:///run/containerd/s/ab20d804ce6f0cd032379aeb5c2718a5130661661e951b4461289725c734e9d6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:26.060063 systemd[1]: Started cri-containerd-af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc.scope - libcontainer container af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc. 
Oct 13 00:01:26.443436 containerd[2018]: time="2025-10-13T00:01:26.443374794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-gvbpr,Uid:355ad4db-b664-4420-b21a-f0cbc5ec56fc,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:01:26.609837 containerd[2018]: time="2025-10-13T00:01:26.608852274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-k27mm,Uid:55154b5a-564e-4426-8465-982b6288d007,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a\"" Oct 13 00:01:26.767278 containerd[2018]: time="2025-10-13T00:01:26.766909651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-pp9j2,Uid:04c57b42-e794-43a6-9970-4992816d2fff,Namespace:calico-system,Attempt:0,} returns sandbox id \"af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc\"" Oct 13 00:01:27.070472 systemd-networkd[1898]: cali557203630d1: Link UP Oct 13 00:01:27.075416 systemd-networkd[1898]: cali557203630d1: Gained carrier Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.712 [INFO][5445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0 calico-apiserver-66885c755- calico-apiserver 355ad4db-b664-4420-b21a-f0cbc5ec56fc 968 0 2025-10-13 00:00:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66885c755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-230 calico-apiserver-66885c755-gvbpr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali557203630d1 [] [] }} ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.712 [INFO][5445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.888 [INFO][5474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" HandleID="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Workload="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.890 [INFO][5474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" HandleID="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Workload="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000398b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-230", "pod":"calico-apiserver-66885c755-gvbpr", "timestamp":"2025-10-13 00:01:26.887945384 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.891 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.891 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.891 [INFO][5474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.918 [INFO][5474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.933 [INFO][5474] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.952 [INFO][5474] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.961 [INFO][5474] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.971 [INFO][5474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.972 [INFO][5474] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:26.983 [INFO][5474] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58 Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:27.005 [INFO][5474] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:27.023 [INFO][5474] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.197/26] block=192.168.5.192/26 handle="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:27.023 [INFO][5474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.197/26] handle="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" host="ip-172-31-31-230" Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:27.023 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:27.178850 containerd[2018]: 2025-10-13 00:01:27.023 [INFO][5474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.197/26] IPv6=[] ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" HandleID="k8s-pod-network.0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Workload="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.180632 containerd[2018]: 2025-10-13 00:01:27.046 [INFO][5445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0", GenerateName:"calico-apiserver-66885c755-", Namespace:"calico-apiserver", SelfLink:"", UID:"355ad4db-b664-4420-b21a-f0cbc5ec56fc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66885c755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"calico-apiserver-66885c755-gvbpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali557203630d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:27.180632 containerd[2018]: 2025-10-13 00:01:27.047 [INFO][5445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.197/32] ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.180632 containerd[2018]: 2025-10-13 00:01:27.047 [INFO][5445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali557203630d1 ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.180632 containerd[2018]: 2025-10-13 00:01:27.081 [INFO][5445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.180632 containerd[2018]: 2025-10-13 00:01:27.084 [INFO][5445] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0", GenerateName:"calico-apiserver-66885c755-", Namespace:"calico-apiserver", SelfLink:"", UID:"355ad4db-b664-4420-b21a-f0cbc5ec56fc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66885c755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58", Pod:"calico-apiserver-66885c755-gvbpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali557203630d1", MAC:"e2:87:08:6f:b7:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:27.180632 containerd[2018]: 2025-10-13 00:01:27.157 [INFO][5445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" Namespace="calico-apiserver" Pod="calico-apiserver-66885c755-gvbpr" WorkloadEndpoint="ip--172--31--31--230-k8s-calico--apiserver--66885c755--gvbpr-eth0" Oct 13 00:01:27.203956 systemd-networkd[1898]: cali837cae1fa57: Gained IPv6LL Oct 13 00:01:27.319808 containerd[2018]: time="2025-10-13T00:01:27.319083150Z" level=info msg="connecting to shim 0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58" address="unix:///run/containerd/s/2ac909b533a463c358b2bb6eac93006e88d5b0cbe93fb37d434e873b48621a0b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:27.333828 systemd-networkd[1898]: cali723fa1ea9f1: Gained IPv6LL Oct 13 00:01:27.443012 systemd[1]: Started cri-containerd-0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58.scope - libcontainer container 0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58. Oct 13 00:01:27.739894 systemd[1]: Started sshd@14-172.31.31.230:22-139.178.89.65:43422.service - OpenSSH per-connection server daemon (139.178.89.65:43422). Oct 13 00:01:28.018660 sshd[5531]: Accepted publickey for core from 139.178.89.65 port 43422 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:28.024450 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:28.044565 systemd-logind[1987]: New session 15 of user core. Oct 13 00:01:28.051152 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 13 00:01:28.062752 containerd[2018]: time="2025-10-13T00:01:28.062632374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66885c755-gvbpr,Uid:355ad4db-b664-4420-b21a-f0cbc5ec56fc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58\"" Oct 13 00:01:28.445094 containerd[2018]: time="2025-10-13T00:01:28.442709059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6q65z,Uid:422228b7-7530-44f9-8a2d-a1df9e389906,Namespace:kube-system,Attempt:0,}" Oct 13 00:01:28.506527 sshd[5540]: Connection closed by 139.178.89.65 port 43422 Oct 13 00:01:28.506240 sshd-session[5531]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:28.530896 systemd[1]: sshd@14-172.31.31.230:22-139.178.89.65:43422.service: Deactivated successfully. Oct 13 00:01:28.540603 systemd-logind[1987]: Session 15 logged out. Waiting for processes to exit. Oct 13 00:01:28.543069 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 00:01:28.555062 systemd-logind[1987]: Removed session 15. Oct 13 00:01:28.740027 systemd-networkd[1898]: cali557203630d1: Gained IPv6LL Oct 13 00:01:28.894691 systemd-networkd[1898]: cali82bf8bc674d: Link UP Oct 13 00:01:28.896064 systemd-networkd[1898]: cali82bf8bc674d: Gained carrier Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.653 [INFO][5548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0 coredns-66bc5c9577- kube-system 422228b7-7530-44f9-8a2d-a1df9e389906 940 0 2025-10-12 23:59:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-230 coredns-66bc5c9577-6q65z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali82bf8bc674d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.653 [INFO][5548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.734 [INFO][5564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" HandleID="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Workload="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.734 [INFO][5564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" HandleID="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Workload="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3890), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ip-172-31-31-230", "pod":"coredns-66bc5c9577-6q65z", "timestamp":"2025-10-13 00:01:28.734408517 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.735 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.735 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.735 [INFO][5564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.773 [INFO][5564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.794 [INFO][5564] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.814 [INFO][5564] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.821 [INFO][5564] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.832 [INFO][5564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.833 [INFO][5564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.840 [INFO][5564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38 Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.857 [INFO][5564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.877 [INFO][5564] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.198/26] block=192.168.5.192/26 handle="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.877 [INFO][5564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.198/26] handle="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" host="ip-172-31-31-230" Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.878 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:28.950689 containerd[2018]: 2025-10-13 00:01:28.878 [INFO][5564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.198/26] IPv6=[] ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" HandleID="k8s-pod-network.ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Workload="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 13 00:01:28.952364 containerd[2018]: 2025-10-13 00:01:28.886 [INFO][5548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"422228b7-7530-44f9-8a2d-a1df9e389906", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.October, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"coredns-66bc5c9577-6q65z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82bf8bc674d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:28.952364 containerd[2018]: 2025-10-13 00:01:28.888 [INFO][5548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.198/32] ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 13 00:01:28.952364 containerd[2018]: 2025-10-13 00:01:28.888 [INFO][5548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82bf8bc674d ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 
13 00:01:28.952364 containerd[2018]: 2025-10-13 00:01:28.897 [INFO][5548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 13 00:01:28.952364 containerd[2018]: 2025-10-13 00:01:28.897 [INFO][5548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"422228b7-7530-44f9-8a2d-a1df9e389906", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.October, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38", Pod:"coredns-66bc5c9577-6q65z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82bf8bc674d", MAC:"16:fd:b7:be:e9:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:28.952364 containerd[2018]: 2025-10-13 00:01:28.920 [INFO][5548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" Namespace="kube-system" Pod="coredns-66bc5c9577-6q65z" WorkloadEndpoint="ip--172--31--31--230-k8s-coredns--66bc5c9577--6q65z-eth0" Oct 13 00:01:29.049577 containerd[2018]: time="2025-10-13T00:01:29.049504374Z" level=info msg="connecting to shim ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38" address="unix:///run/containerd/s/2e4674e34cef82aec22573d9e1e40921999a6bff3b320cb211f298599abdfc55" namespace=k8s.io protocol=ttrpc version=3 
Oct 13 00:01:29.154197 systemd[1]: Started cri-containerd-ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38.scope - libcontainer container ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38. Oct 13 00:01:29.231810 containerd[2018]: time="2025-10-13T00:01:29.231330271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:29.233616 containerd[2018]: time="2025-10-13T00:01:29.233571187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 00:01:29.237674 containerd[2018]: time="2025-10-13T00:01:29.237582763Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:29.247329 containerd[2018]: time="2025-10-13T00:01:29.247139851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:29.249204 containerd[2018]: time="2025-10-13T00:01:29.249150271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 5.715682852s" Oct 13 00:01:29.249419 containerd[2018]: time="2025-10-13T00:01:29.249388531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 00:01:29.254601 containerd[2018]: time="2025-10-13T00:01:29.253926236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:01:29.295753 containerd[2018]: time="2025-10-13T00:01:29.295096796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6q65z,Uid:422228b7-7530-44f9-8a2d-a1df9e389906,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38\"" Oct 13 00:01:29.305054 containerd[2018]: time="2025-10-13T00:01:29.304908116Z" level=info msg="CreateContainer within sandbox \"215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 00:01:29.308363 containerd[2018]: time="2025-10-13T00:01:29.308297624Z" level=info msg="CreateContainer within sandbox \"ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:01:29.326350 containerd[2018]: time="2025-10-13T00:01:29.324756848Z" level=info msg="Container dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:29.353251 containerd[2018]: time="2025-10-13T00:01:29.352760036Z" level=info msg="CreateContainer within sandbox \"215f2ba5c106044b00b9982960bc1dbbe5a43e8d91af957a007622838eee76f2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\"" Oct 13 00:01:29.356911 containerd[2018]: 
time="2025-10-13T00:01:29.356663120Z" level=info msg="StartContainer for \"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\"" Oct 13 00:01:29.363603 containerd[2018]: time="2025-10-13T00:01:29.362069492Z" level=info msg="Container 1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:29.363603 containerd[2018]: time="2025-10-13T00:01:29.362521568Z" level=info msg="connecting to shim dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27" address="unix:///run/containerd/s/9d87840f7dc00ca3f27f5bf007313e12ab8c7160c4291f042a5d995bd98370eb" protocol=ttrpc version=3 Oct 13 00:01:29.382409 containerd[2018]: time="2025-10-13T00:01:29.382299056Z" level=info msg="CreateContainer within sandbox \"ab5a8814a83a38aa1dec53c6e68dc3e2a69f588ab5c7ea727c74575a0d46bd38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b\"" Oct 13 00:01:29.385876 containerd[2018]: time="2025-10-13T00:01:29.384984164Z" level=info msg="StartContainer for \"1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b\"" Oct 13 00:01:29.392746 containerd[2018]: time="2025-10-13T00:01:29.392643584Z" level=info msg="connecting to shim 1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b" address="unix:///run/containerd/s/2e4674e34cef82aec22573d9e1e40921999a6bff3b320cb211f298599abdfc55" protocol=ttrpc version=3 Oct 13 00:01:29.414344 systemd[1]: Started cri-containerd-dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27.scope - libcontainer container dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27. Oct 13 00:01:29.442977 systemd[1]: Started cri-containerd-1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b.scope - libcontainer container 1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b. 
Oct 13 00:01:29.468063 containerd[2018]: time="2025-10-13T00:01:29.467873781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6q2t,Uid:e00ccdfa-8874-41a5-8202-ed22b850ea32,Namespace:calico-system,Attempt:0,}" Oct 13 00:01:29.604010 containerd[2018]: time="2025-10-13T00:01:29.603831981Z" level=info msg="StartContainer for \"1cc24d787fe9bb368931667d595043b696a0ac249555bd46f2c62295e452ef7b\" returns successfully" Oct 13 00:01:29.703327 containerd[2018]: time="2025-10-13T00:01:29.703255762Z" level=info msg="StartContainer for \"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\" returns successfully" Oct 13 00:01:29.863545 systemd-networkd[1898]: cali3dd8c2eb07e: Link UP Oct 13 00:01:29.865467 systemd-networkd[1898]: cali3dd8c2eb07e: Gained carrier Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.653 [INFO][5673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0 csi-node-driver- calico-system e00ccdfa-8874-41a5-8202-ed22b850ea32 720 0 2025-10-13 00:00:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-230 csi-node-driver-h6q2t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3dd8c2eb07e [] [] }} ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.653 [INFO][5673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.764 [INFO][5703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" HandleID="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Workload="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.764 [INFO][5703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" HandleID="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Workload="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-230", "pod":"csi-node-driver-h6q2t", "timestamp":"2025-10-13 00:01:29.764664094 +0000 UTC"}, Hostname:"ip-172-31-31-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.765 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.765 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.765 [INFO][5703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-230' Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.805 [INFO][5703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.814 [INFO][5703] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.822 [INFO][5703] ipam/ipam.go 511: Trying affinity for 192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.825 [INFO][5703] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.830 [INFO][5703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.192/26 host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.830 [INFO][5703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.5.192/26 handle="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.834 [INFO][5703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.842 [INFO][5703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.5.192/26 handle="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.853 [INFO][5703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.5.199/26] block=192.168.5.192/26 handle="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.854 [INFO][5703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.199/26] handle="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" host="ip-172-31-31-230" Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.854 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:01:29.909187 containerd[2018]: 2025-10-13 00:01:29.854 [INFO][5703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.199/26] IPv6=[] ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" HandleID="k8s-pod-network.03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Workload="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.912532 containerd[2018]: 2025-10-13 00:01:29.859 [INFO][5673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e00ccdfa-8874-41a5-8202-ed22b850ea32", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"", Pod:"csi-node-driver-h6q2t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dd8c2eb07e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:29.912532 containerd[2018]: 2025-10-13 00:01:29.859 [INFO][5673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.199/32] ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.912532 containerd[2018]: 2025-10-13 00:01:29.859 [INFO][5673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dd8c2eb07e ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.912532 containerd[2018]: 2025-10-13 00:01:29.867 [INFO][5673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.912532 containerd[2018]: 2025-10-13 00:01:29.869 [INFO][5673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" 
Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e00ccdfa-8874-41a5-8202-ed22b850ea32", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-230", ContainerID:"03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d", Pod:"csi-node-driver-h6q2t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dd8c2eb07e", MAC:"c6:14:b1:dd:82:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:01:29.912532 containerd[2018]: 2025-10-13 00:01:29.904 [INFO][5673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" Namespace="calico-system" Pod="csi-node-driver-h6q2t" WorkloadEndpoint="ip--172--31--31--230-k8s-csi--node--driver--h6q2t-eth0" Oct 13 00:01:29.973206 containerd[2018]: time="2025-10-13T00:01:29.972803783Z" level=info msg="connecting to shim 03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d" address="unix:///run/containerd/s/62d24e230a983062d6e507106dc5a78980ebf7fca490b7b855384e0ed38045a2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:01:30.060845 systemd[1]: Started cri-containerd-03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d.scope - libcontainer container 03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d. 
Oct 13 00:01:30.109753 kubelet[3330]: I1013 00:01:30.108236 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64d6998867-blpss" podStartSLOduration=62.38203054 podStartE2EDuration="1m8.108214196s" podCreationTimestamp="2025-10-13 00:00:22 +0000 UTC" firstStartedPulling="2025-10-13 00:01:23.527637435 +0000 UTC m=+94.478928230" lastFinishedPulling="2025-10-13 00:01:29.253821079 +0000 UTC m=+100.205111886" observedRunningTime="2025-10-13 00:01:30.107029928 +0000 UTC m=+101.058320747" watchObservedRunningTime="2025-10-13 00:01:30.108214196 +0000 UTC m=+101.059504991" Oct 13 00:01:30.145774 kubelet[3330]: I1013 00:01:30.145429 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6q65z" podStartSLOduration=96.145404104 podStartE2EDuration="1m36.145404104s" podCreationTimestamp="2025-10-12 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:01:30.142621316 +0000 UTC m=+101.093912123" watchObservedRunningTime="2025-10-13 00:01:30.145404104 +0000 UTC m=+101.096694899" Oct 13 00:01:30.238919 containerd[2018]: time="2025-10-13T00:01:30.238765532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6q2t,Uid:e00ccdfa-8874-41a5-8202-ed22b850ea32,Namespace:calico-system,Attempt:0,} returns sandbox id \"03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d\"" Oct 13 00:01:30.351976 containerd[2018]: time="2025-10-13T00:01:30.351918489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\" id:\"0791cf3a311e1188ae51656fef616bb3888607545182d2a0bf2a7f88457679c4\" pid:5784 exit_status:1 exited_at:{seconds:1760313690 nanos:349798737}" Oct 13 00:01:30.596375 systemd-networkd[1898]: cali82bf8bc674d: Gained IPv6LL Oct 13 00:01:31.137685 containerd[2018]: time="2025-10-13T00:01:31.137579733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\" id:\"e02683f773365aca17f2fae1578bdee9c1e29744cbcaa497bc0027ef56a1242a\" pid:5819 exited_at:{seconds:1760313691 nanos:137223537}" Oct 13 00:01:31.619952 systemd-networkd[1898]: cali3dd8c2eb07e: Gained IPv6LL Oct 13 00:01:33.546904 systemd[1]: Started sshd@15-172.31.31.230:22-139.178.89.65:54138.service - OpenSSH per-connection server daemon (139.178.89.65:54138). Oct 13 00:01:33.748384 sshd[5829]: Accepted publickey for core from 139.178.89.65 port 54138 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:33.751973 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:33.760329 systemd-logind[1987]: New session 16 of user core. Oct 13 00:01:33.768218 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 00:01:34.024814 sshd[5832]: Connection closed by 139.178.89.65 port 54138 Oct 13 00:01:34.024563 sshd-session[5829]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:34.032460 systemd[1]: sshd@15-172.31.31.230:22-139.178.89.65:54138.service: Deactivated successfully. Oct 13 00:01:34.035867 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 00:01:34.038378 systemd-logind[1987]: Session 16 logged out. Waiting for processes to exit. Oct 13 00:01:34.042233 systemd-logind[1987]: Removed session 16. 
Oct 13 00:01:34.156970 ntpd[2201]: Listen normally on 6 vxlan.calico 192.168.5.192:123 Oct 13 00:01:34.157101 ntpd[2201]: Listen normally on 7 vxlan.calico [fe80::6424:9eff:fe85:a9dc%4]:123 Oct 13 00:01:34.157151 ntpd[2201]: Listen normally on 8 cali00c6a11f6c5 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 13 00:01:34.157196 ntpd[2201]: Listen normally on 9 cali07485853508 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 13 00:01:34.157239 ntpd[2201]: Listen normally on 10 cali837cae1fa57 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 13 00:01:34.157282 ntpd[2201]: Listen normally on 11 cali723fa1ea9f1 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 13 00:01:34.157326 ntpd[2201]: Listen normally on 12 cali557203630d1 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 13 00:01:34.157395 ntpd[2201]: Listen normally on 13 cali82bf8bc674d [fe80::ecee:eeff:feee:eeee%12]:123 Oct 13 00:01:34.157449 ntpd[2201]: Listen normally on 14 cali3dd8c2eb07e [fe80::ecee:eeff:feee:eeee%13]:123 Oct 13 00:01:39.039853 containerd[2018]: time="2025-10-13T00:01:39.039786988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:39.041434 containerd[2018]: time="2025-10-13T00:01:39.041207200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 00:01:39.042835 containerd[2018]: time="2025-10-13T00:01:39.042764608Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:39.048189 containerd[2018]: time="2025-10-13T00:01:39.048133180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:39.049816 containerd[2018]: time="2025-10-13T00:01:39.049288348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 9.79530492s" Oct
13 00:01:39.049816 containerd[2018]: time="2025-10-13T00:01:39.049356064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:01:39.052141 containerd[2018]: time="2025-10-13T00:01:39.052080580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 00:01:39.062921 containerd[2018]: time="2025-10-13T00:01:39.062854744Z" level=info msg="CreateContainer within sandbox \"b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:01:39.064426 systemd[1]: Started sshd@16-172.31.31.230:22-139.178.89.65:54142.service - OpenSSH per-connection server daemon (139.178.89.65:54142). Oct 13 00:01:39.086267 containerd[2018]: time="2025-10-13T00:01:39.086006164Z" level=info msg="Container 9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:39.119295 containerd[2018]: time="2025-10-13T00:01:39.119007077Z" level=info msg="CreateContainer within sandbox \"b47dfc73885cde053170b83a67f32f667534ca480cb550ac10501c2af3960a1a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311\"" Oct 13 00:01:39.123425 containerd[2018]: time="2025-10-13T00:01:39.123372773Z" level=info msg="StartContainer for \"9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311\"" Oct 13 00:01:39.128253 containerd[2018]: time="2025-10-13T00:01:39.128198261Z" level=info msg="connecting to shim 9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311" address="unix:///run/containerd/s/51b036eee77f9ed5285f25b9c08c7912478f581da10b75953e7151c591f3d7bb" protocol=ttrpc version=3 Oct 13 00:01:39.179064 systemd[1]: Started cri-containerd-9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311.scope - libcontainer container 9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311. Oct 13 00:01:39.279261 containerd[2018]: time="2025-10-13T00:01:39.279206609Z" level=info msg="StartContainer for \"9742b34bac699546b8abb244bf1eeee5faa829862de0d79bb4aac49f10959311\" returns successfully" Oct 13 00:01:39.301896 sshd[5861]: Accepted publickey for core from 139.178.89.65 port 54142 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:39.305910 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:39.319471 systemd-logind[1987]: New session 17 of user core. Oct 13 00:01:39.326061 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 00:01:39.701936 sshd[5893]: Connection closed by 139.178.89.65 port 54142 Oct 13 00:01:39.701630 sshd-session[5861]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:39.711169 systemd[1]: sshd@16-172.31.31.230:22-139.178.89.65:54142.service: Deactivated successfully. Oct 13 00:01:39.717510 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 00:01:39.720576 systemd-logind[1987]: Session 17 logged out. Waiting for processes to exit. Oct 13 00:01:39.723537 systemd-logind[1987]: Removed session 17. Oct 13 00:01:42.121122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318436488.mount: Deactivated successfully. 
Oct 13 00:01:42.127073 kubelet[3330]: I1013 00:01:42.127017 3330 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:01:43.257465 containerd[2018]: time="2025-10-13T00:01:43.256985061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:43.259958 containerd[2018]: time="2025-10-13T00:01:43.259903677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 00:01:43.260915 containerd[2018]: time="2025-10-13T00:01:43.260843733Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:43.265539 containerd[2018]: time="2025-10-13T00:01:43.265459485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:43.268056 containerd[2018]: time="2025-10-13T00:01:43.267880881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.215737817s" Oct 13 00:01:43.268056 containerd[2018]: time="2025-10-13T00:01:43.267932709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 00:01:43.271866 containerd[2018]: time="2025-10-13T00:01:43.271751601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:01:43.279746 containerd[2018]: time="2025-10-13T00:01:43.278696409Z" level=info msg="CreateContainer within sandbox \"af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 00:01:43.292063 containerd[2018]: time="2025-10-13T00:01:43.291992709Z" level=info msg="Container db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:43.314857 containerd[2018]: time="2025-10-13T00:01:43.313450377Z" level=info msg="CreateContainer within sandbox \"af4684eb1a113f4da0814f16ae7802e745a4761c55bce62df0d748683c0070cc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\"" Oct 13 00:01:43.316594 containerd[2018]: time="2025-10-13T00:01:43.316517289Z" level=info msg="StartContainer for \"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\"" Oct 13 00:01:43.320801 containerd[2018]: time="2025-10-13T00:01:43.320639985Z" level=info msg="connecting to shim db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d" address="unix:///run/containerd/s/ab20d804ce6f0cd032379aeb5c2718a5130661661e951b4461289725c734e9d6" protocol=ttrpc version=3 Oct 13 00:01:43.388400 systemd[1]: Started cri-containerd-db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d.scope - libcontainer container db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d. 
Oct 13 00:01:43.587292 containerd[2018]: time="2025-10-13T00:01:43.586997987Z" level=info msg="StartContainer for \"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\" returns successfully" Oct 13 00:01:43.651163 containerd[2018]: time="2025-10-13T00:01:43.650406551Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:43.652625 containerd[2018]: time="2025-10-13T00:01:43.652405991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:01:43.661749 containerd[2018]: time="2025-10-13T00:01:43.660052007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 388.23515ms" Oct 13 00:01:43.662145 containerd[2018]: time="2025-10-13T00:01:43.661968875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:01:43.666581 containerd[2018]: time="2025-10-13T00:01:43.666258143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 00:01:43.671351 containerd[2018]: time="2025-10-13T00:01:43.671289155Z" level=info msg="CreateContainer within sandbox \"0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:01:43.692740 containerd[2018]: time="2025-10-13T00:01:43.692123327Z" level=info msg="Container 10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:43.738944 containerd[2018]: time="2025-10-13T00:01:43.737840231Z" level=info msg="CreateContainer within sandbox \"0e283d2d34d88b3d293b5d5088381f3f2d51490c9ea010f54d10ef3100c57d58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a\"" Oct 13 00:01:43.741352 containerd[2018]: time="2025-10-13T00:01:43.741288491Z" level=info msg="StartContainer for \"10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a\"" Oct 13 00:01:43.749003 containerd[2018]: time="2025-10-13T00:01:43.747821255Z" level=info msg="connecting to shim 10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a" address="unix:///run/containerd/s/2ac909b533a463c358b2bb6eac93006e88d5b0cbe93fb37d434e873b48621a0b" protocol=ttrpc version=3 Oct 13 00:01:43.813595 systemd[1]: Started cri-containerd-10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a.scope - libcontainer container 10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a. 
Oct 13 00:01:44.010325 containerd[2018]: time="2025-10-13T00:01:44.009820953Z" level=info msg="StartContainer for \"10a09e21ff65e33552511c72fae4baec5541e3598852895d0e1ebbc5392a146a\" returns successfully" Oct 13 00:01:44.196284 kubelet[3330]: I1013 00:01:44.196179 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66885c755-k27mm" podStartSLOduration=82.76010154 podStartE2EDuration="1m35.196156354s" podCreationTimestamp="2025-10-13 00:00:09 +0000 UTC" firstStartedPulling="2025-10-13 00:01:26.614967138 +0000 UTC m=+97.566257933" lastFinishedPulling="2025-10-13 00:01:39.051021952 +0000 UTC m=+110.002312747" observedRunningTime="2025-10-13 00:01:40.140696142 +0000 UTC m=+111.091986949" watchObservedRunningTime="2025-10-13 00:01:44.196156354 +0000 UTC m=+115.147447137" Oct 13 00:01:44.251213 kubelet[3330]: I1013 00:01:44.251121 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-pp9j2" podStartSLOduration=65.755583392 podStartE2EDuration="1m22.251099314s" podCreationTimestamp="2025-10-13 00:00:22 +0000 UTC" firstStartedPulling="2025-10-13 00:01:26.774669175 +0000 UTC m=+97.725959958" lastFinishedPulling="2025-10-13 00:01:43.270185085 +0000 UTC m=+114.221475880" observedRunningTime="2025-10-13 00:01:44.197542822 +0000 UTC m=+115.148833617" watchObservedRunningTime="2025-10-13 00:01:44.251099314 +0000 UTC m=+115.202390109" Oct 13 00:01:44.762805 systemd[1]: Started sshd@17-172.31.31.230:22-139.178.89.65:38078.service - OpenSSH per-connection server daemon (139.178.89.65:38078). Oct 13 00:01:44.846972 containerd[2018]: time="2025-10-13T00:01:44.844269793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\" id:\"7abb39715e9c01eecfd388710cf600e8c090418850542e131835431693cf7088\" pid:6006 exit_status:1 exited_at:{seconds:1760313704 nanos:837071473}" Oct 13 00:01:45.024613 sshd[6030]: Accepted publickey for core from 139.178.89.65 port 38078 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:45.029855 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:45.042975 systemd-logind[1987]: New session 18 of user core. Oct 13 00:01:45.052116 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 13 00:01:45.299788 kubelet[3330]: I1013 00:01:45.298373 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66885c755-gvbpr" podStartSLOduration=80.709924006 podStartE2EDuration="1m36.298350983s" podCreationTimestamp="2025-10-13 00:00:09 +0000 UTC" firstStartedPulling="2025-10-13 00:01:28.075557022 +0000 UTC m=+99.026847817" lastFinishedPulling="2025-10-13 00:01:43.663983987 +0000 UTC m=+114.615274794" observedRunningTime="2025-10-13 00:01:44.253526926 +0000 UTC m=+115.204817817" watchObservedRunningTime="2025-10-13 00:01:45.298350983 +0000 UTC m=+116.249641778" Oct 13 00:01:45.323767 containerd[2018]: time="2025-10-13T00:01:45.323649455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:45.326046 containerd[2018]: time="2025-10-13T00:01:45.325908995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 00:01:45.331793 containerd[2018]: time="2025-10-13T00:01:45.330566795Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:45.344646 containerd[2018]: time="2025-10-13T00:01:45.344591135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:45.348750 containerd[2018]: time="2025-10-13T00:01:45.348010595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.681694168s" Oct 13 00:01:45.349815 containerd[2018]: time="2025-10-13T00:01:45.349768031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 00:01:45.366232 containerd[2018]: time="2025-10-13T00:01:45.366184284Z" level=info msg="CreateContainer within sandbox \"03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 00:01:45.408357 containerd[2018]: time="2025-10-13T00:01:45.408301476Z" level=info msg="Container 148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:45.433611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938689031.mount: Deactivated successfully. 
Oct 13 00:01:45.479571 containerd[2018]: time="2025-10-13T00:01:45.478666224Z" level=info msg="CreateContainer within sandbox \"03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19\"" Oct 13 00:01:45.487274 containerd[2018]: time="2025-10-13T00:01:45.487222560Z" level=info msg="StartContainer for \"148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19\"" Oct 13 00:01:45.505988 containerd[2018]: time="2025-10-13T00:01:45.504844608Z" level=info msg="connecting to shim 148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19" address="unix:///run/containerd/s/62d24e230a983062d6e507106dc5a78980ebf7fca490b7b855384e0ed38045a2" protocol=ttrpc version=3 Oct 13 00:01:45.596033 sshd[6037]: Connection closed by 139.178.89.65 port 38078 Oct 13 00:01:45.594779 sshd-session[6030]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:45.602010 systemd[1]: Started cri-containerd-148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19.scope - libcontainer container 148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19. Oct 13 00:01:45.644334 systemd[1]: sshd@17-172.31.31.230:22-139.178.89.65:38078.service: Deactivated successfully. Oct 13 00:01:45.660558 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 00:01:45.666989 systemd-logind[1987]: Session 18 logged out. Waiting for processes to exit. Oct 13 00:01:45.676591 systemd[1]: Started sshd@18-172.31.31.230:22-139.178.89.65:38084.service - OpenSSH per-connection server daemon (139.178.89.65:38084). Oct 13 00:01:45.680636 systemd-logind[1987]: Removed session 18. Oct 13 00:01:45.921041 sshd[6088]: Accepted publickey for core from 139.178.89.65 port 38084 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:45.923821 sshd-session[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:45.935614 systemd-logind[1987]: New session 19 of user core. Oct 13 00:01:45.946210 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 00:01:46.039986 containerd[2018]: time="2025-10-13T00:01:46.039905255Z" level=info msg="StartContainer for \"148dad28af425e5b3454bb8b228d9e7efa76dde504fb2951663e05e996a4cb19\" returns successfully" Oct 13 00:01:46.046954 containerd[2018]: time="2025-10-13T00:01:46.046517879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 00:01:46.118156 containerd[2018]: time="2025-10-13T00:01:46.118028759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\" id:\"b6b2604ef64eab7c249f119ebcb4c651da3465f5e1a8ad0d3a48a8afd02db066\" pid:6057 exited_at:{seconds:1760313706 nanos:117457583}" Oct 13 00:01:46.722753 sshd[6096]: Connection closed by 139.178.89.65 port 38084 Oct 13 00:01:46.723568 sshd-session[6088]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:46.734364 systemd[1]: sshd@18-172.31.31.230:22-139.178.89.65:38084.service: Deactivated successfully. Oct 13 00:01:46.742898 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 00:01:46.752057 systemd-logind[1987]: Session 19 logged out. Waiting for processes to exit. Oct 13 00:01:46.769814 systemd[1]: Started sshd@19-172.31.31.230:22-139.178.89.65:38086.service - OpenSSH per-connection server daemon (139.178.89.65:38086). 
Oct 13 00:01:46.775670 systemd-logind[1987]: Removed session 19. Oct 13 00:01:46.984395 sshd[6117]: Accepted publickey for core from 139.178.89.65 port 38086 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:46.986044 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:47.002555 systemd-logind[1987]: New session 20 of user core. Oct 13 00:01:47.009999 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 00:01:48.019882 containerd[2018]: time="2025-10-13T00:01:48.016871221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:48.021028 containerd[2018]: time="2025-10-13T00:01:48.019950889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 00:01:48.026007 containerd[2018]: time="2025-10-13T00:01:48.025141801Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:48.037758 containerd[2018]: time="2025-10-13T00:01:48.036571417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:01:48.043736 containerd[2018]: time="2025-10-13T00:01:48.041947513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.995365338s" Oct 13 00:01:48.043990 containerd[2018]: time="2025-10-13T00:01:48.043950229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 00:01:48.059010 containerd[2018]: time="2025-10-13T00:01:48.058954129Z" level=info msg="CreateContainer within sandbox \"03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 00:01:48.080374 containerd[2018]: time="2025-10-13T00:01:48.080281597Z" level=info msg="Container 2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:01:48.102167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729687505.mount: Deactivated successfully. 
Oct 13 00:01:48.119444 containerd[2018]: time="2025-10-13T00:01:48.119363281Z" level=info msg="CreateContainer within sandbox \"03aac996ea778df4cf1b7c643e33d9adf7a45c345e1170d1d4bf9aaee79b423d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967\"" Oct 13 00:01:48.120675 containerd[2018]: time="2025-10-13T00:01:48.120601909Z" level=info msg="StartContainer for \"2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967\"" Oct 13 00:01:48.131671 containerd[2018]: time="2025-10-13T00:01:48.131603209Z" level=info msg="connecting to shim 2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967" address="unix:///run/containerd/s/62d24e230a983062d6e507106dc5a78980ebf7fca490b7b855384e0ed38045a2" protocol=ttrpc version=3 Oct 13 00:01:48.193783 systemd[1]: Started cri-containerd-2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967.scope - libcontainer container 2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967. Oct 13 00:01:48.466682 containerd[2018]: time="2025-10-13T00:01:48.465972675Z" level=info msg="StartContainer for \"2e92790902790d1eabbd5cb0a5e18b0f403a7e4efb73260efec875678af0c967\" returns successfully" Oct 13 00:01:48.678595 kubelet[3330]: I1013 00:01:48.678532 3330 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 00:01:48.682656 kubelet[3330]: I1013 00:01:48.678609 3330 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 00:01:48.726956 sshd[6120]: Connection closed by 139.178.89.65 port 38086 Oct 13 00:01:48.728234 sshd-session[6117]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:48.741498 systemd[1]: sshd@19-172.31.31.230:22-139.178.89.65:38086.service: Deactivated successfully. Oct 13 00:01:48.748401 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 00:01:48.775341 systemd-logind[1987]: Session 20 logged out. Waiting for processes to exit. Oct 13 00:01:48.779134 systemd[1]: Started sshd@20-172.31.31.230:22-139.178.89.65:38088.service - OpenSSH per-connection server daemon (139.178.89.65:38088). Oct 13 00:01:48.788011 systemd-logind[1987]: Removed session 20. Oct 13 00:01:49.010371 sshd[6172]: Accepted publickey for core from 139.178.89.65 port 38088 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:49.014104 sshd-session[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:49.028779 systemd-logind[1987]: New session 21 of user core. Oct 13 00:01:49.035019 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 00:01:50.117097 sshd[6177]: Connection closed by 139.178.89.65 port 38088 Oct 13 00:01:50.118140 sshd-session[6172]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:50.126904 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 00:01:50.129450 systemd[1]: sshd@20-172.31.31.230:22-139.178.89.65:38088.service: Deactivated successfully. Oct 13 00:01:50.138971 systemd-logind[1987]: Session 21 logged out. Waiting for processes to exit. Oct 13 00:01:50.164124 systemd[1]: Started sshd@21-172.31.31.230:22-139.178.89.65:38104.service - OpenSSH per-connection server daemon (139.178.89.65:38104). Oct 13 00:01:50.167612 systemd-logind[1987]: Removed session 21. 
Oct 13 00:01:50.384852 sshd[6192]: Accepted publickey for core from 139.178.89.65 port 38104 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:50.387041 sshd-session[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:50.407678 systemd-logind[1987]: New session 22 of user core. Oct 13 00:01:50.418884 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 00:01:50.741938 sshd[6195]: Connection closed by 139.178.89.65 port 38104 Oct 13 00:01:50.742665 sshd-session[6192]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:50.750383 systemd[1]: sshd@21-172.31.31.230:22-139.178.89.65:38104.service: Deactivated successfully. Oct 13 00:01:50.758549 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 00:01:50.763805 systemd-logind[1987]: Session 22 logged out. Waiting for processes to exit. Oct 13 00:01:50.767705 systemd-logind[1987]: Removed session 22. Oct 13 00:01:51.071105 containerd[2018]: time="2025-10-13T00:01:51.071042500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\" id:\"5ff583ffd9093411964227f35d829e9a273a7a8270fc4aa86d77f79ee3c4b1ef\" pid:6220 exited_at:{seconds:1760313711 nanos:70203184}" Oct 13 00:01:51.106946 kubelet[3330]: I1013 00:01:51.106807 3330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h6q2t" podStartSLOduration=71.311327663 podStartE2EDuration="1m29.106781704s" podCreationTimestamp="2025-10-13 00:00:22 +0000 UTC" firstStartedPulling="2025-10-13 00:01:30.249535016 +0000 UTC m=+101.200825811" lastFinishedPulling="2025-10-13 00:01:48.044989057 +0000 UTC m=+118.996279852" observedRunningTime="2025-10-13 00:01:49.320309391 +0000 UTC m=+120.271600198" watchObservedRunningTime="2025-10-13 00:01:51.106781704 +0000 UTC m=+122.058072535" Oct 13 00:01:55.779787 systemd[1]: Started sshd@22-172.31.31.230:22-139.178.89.65:59340.service - OpenSSH per-connection server daemon (139.178.89.65:59340). Oct 13 00:01:55.989218 sshd[6237]: Accepted publickey for core from 139.178.89.65 port 59340 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:01:55.991932 sshd-session[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:01:56.000568 systemd-logind[1987]: New session 23 of user core. Oct 13 00:01:56.007972 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 00:01:56.251861 sshd[6240]: Connection closed by 139.178.89.65 port 59340 Oct 13 00:01:56.252053 sshd-session[6237]: pam_unix(sshd:session): session closed for user core Oct 13 00:01:56.258256 systemd-logind[1987]: Session 23 logged out. Waiting for processes to exit. Oct 13 00:01:56.259003 systemd[1]: sshd@22-172.31.31.230:22-139.178.89.65:59340.service: Deactivated successfully. Oct 13 00:01:56.263936 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 00:01:56.269595 systemd-logind[1987]: Removed session 23. 
Oct 13 00:01:58.335344 containerd[2018]: time="2025-10-13T00:01:58.335245428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\" id:\"1b0e719e81fdb35465e4db899b4cb7fc3782cf89ba872e31e052bb85bd01031c\" pid:6265 exited_at:{seconds:1760313718 nanos:334159176}" Oct 13 00:01:58.444785 containerd[2018]: time="2025-10-13T00:01:58.444694596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\" id:\"25cf96fb0c5b1cc81ba19320e6b1e380a7451a32be1c9d247a5bb3025ded5995\" pid:6288 exited_at:{seconds:1760313718 nanos:444340116}" Oct 13 00:02:01.227779 containerd[2018]: time="2025-10-13T00:02:01.226649966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\" id:\"1ffdc327f0ac0ecb03d997449cbafef1ff77781e85c44e4e2c7aca5d1ad57797\" pid:6310 exited_at:{seconds:1760313721 nanos:225769346}" Oct 13 00:02:01.294305 systemd[1]: Started sshd@23-172.31.31.230:22-139.178.89.65:59346.service - OpenSSH per-connection server daemon (139.178.89.65:59346). Oct 13 00:02:01.524351 sshd[6320]: Accepted publickey for core from 139.178.89.65 port 59346 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:02:01.527810 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:02:01.542798 systemd-logind[1987]: New session 24 of user core. Oct 13 00:02:01.549773 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 00:02:01.830129 sshd[6323]: Connection closed by 139.178.89.65 port 59346 Oct 13 00:02:01.831209 sshd-session[6320]: pam_unix(sshd:session): session closed for user core Oct 13 00:02:01.840614 systemd-logind[1987]: Session 24 logged out. Waiting for processes to exit. Oct 13 00:02:01.842299 systemd[1]: sshd@23-172.31.31.230:22-139.178.89.65:59346.service: Deactivated successfully. Oct 13 00:02:01.850594 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 00:02:01.855760 systemd-logind[1987]: Removed session 24. Oct 13 00:02:06.874548 systemd[1]: Started sshd@24-172.31.31.230:22-139.178.89.65:47970.service - OpenSSH per-connection server daemon (139.178.89.65:47970). Oct 13 00:02:07.087770 sshd[6341]: Accepted publickey for core from 139.178.89.65 port 47970 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:02:07.090942 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:02:07.101574 systemd-logind[1987]: New session 25 of user core. Oct 13 00:02:07.111309 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 00:02:07.407847 sshd[6344]: Connection closed by 139.178.89.65 port 47970 Oct 13 00:02:07.408844 sshd-session[6341]: pam_unix(sshd:session): session closed for user core Oct 13 00:02:07.419993 systemd[1]: sshd@24-172.31.31.230:22-139.178.89.65:47970.service: Deactivated successfully. Oct 13 00:02:07.425280 systemd[1]: session-25.scope: Deactivated successfully. Oct 13 00:02:07.430606 systemd-logind[1987]: Session 25 logged out. Waiting for processes to exit. Oct 13 00:02:07.433633 systemd-logind[1987]: Removed session 25. Oct 13 00:02:12.450413 systemd[1]: Started sshd@25-172.31.31.230:22-139.178.89.65:32800.service - OpenSSH per-connection server daemon (139.178.89.65:32800). 
Oct 13 00:02:12.670377 sshd[6360]: Accepted publickey for core from 139.178.89.65 port 32800 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:02:12.672904 sshd-session[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:02:12.688088 systemd-logind[1987]: New session 26 of user core. Oct 13 00:02:12.696073 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 13 00:02:13.008106 sshd[6363]: Connection closed by 139.178.89.65 port 32800 Oct 13 00:02:13.008980 sshd-session[6360]: pam_unix(sshd:session): session closed for user core Oct 13 00:02:13.020264 systemd[1]: sshd@25-172.31.31.230:22-139.178.89.65:32800.service: Deactivated successfully. Oct 13 00:02:13.031705 systemd[1]: session-26.scope: Deactivated successfully. Oct 13 00:02:13.034545 systemd-logind[1987]: Session 26 logged out. Waiting for processes to exit. Oct 13 00:02:13.039663 systemd-logind[1987]: Removed session 26. Oct 13 00:02:15.388321 containerd[2018]: time="2025-10-13T00:02:15.388097297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\" id:\"b6278a33aa03c6ee45d82242fcfd3fab9cfd94d2ef625c608f8f7b546c3a19f3\" pid:6386 exited_at:{seconds:1760313735 nanos:387510365}" Oct 13 00:02:18.061079 systemd[1]: Started sshd@26-172.31.31.230:22-139.178.89.65:32812.service - OpenSSH per-connection server daemon (139.178.89.65:32812). Oct 13 00:02:18.293294 sshd[6397]: Accepted publickey for core from 139.178.89.65 port 32812 ssh2: RSA SHA256:EfdkOo0fITwR0ZDbrozMGmKukA+GevflJqFl6Kj530A Oct 13 00:02:18.297704 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:02:18.310334 systemd-logind[1987]: New session 27 of user core. Oct 13 00:02:18.319033 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 13 00:02:18.666183 sshd[6400]: Connection closed by 139.178.89.65 port 32812 Oct 13 00:02:18.666015 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Oct 13 00:02:18.680637 systemd[1]: sshd@26-172.31.31.230:22-139.178.89.65:32812.service: Deactivated successfully. Oct 13 00:02:18.680994 systemd-logind[1987]: Session 27 logged out. Waiting for processes to exit. Oct 13 00:02:18.690568 systemd[1]: session-27.scope: Deactivated successfully. Oct 13 00:02:18.698037 systemd-logind[1987]: Removed session 27. Oct 13 00:02:21.301580 containerd[2018]: time="2025-10-13T00:02:21.301517398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\" id:\"36e1c6c5d092537b6a7a8e0112802d3aa2a9023673f5447b14daf2f65e67965a\" pid:6424 exited_at:{seconds:1760313741 nanos:300619894}" Oct 13 00:02:31.143266 containerd[2018]: time="2025-10-13T00:02:31.143206063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dddfdb5d1afd602a5489b17dbda4bde67a4f1e9d1857ff3cb126de6a98c27a27\" id:\"2c3e801b460e5b94ebe200bd9e07de2b54ff9ce7fda7570d6afd2faae58c3bc9\" pid:6450 exited_at:{seconds:1760313751 nanos:142568203}" Oct 13 00:02:32.960434 systemd[1]: cri-containerd-4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf.scope: Deactivated successfully. Oct 13 00:02:32.962798 systemd[1]: cri-containerd-4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf.scope: Consumed 6.178s CPU time, 61.7M memory peak, 256K read from disk. 
Oct 13 00:02:32.972829 containerd[2018]: time="2025-10-13T00:02:32.972740100Z" level=info msg="received exit event container_id:\"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\" id:\"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\" pid:3166 exit_status:1 exited_at:{seconds:1760313752 nanos:970885152}" Oct 13 00:02:32.974858 containerd[2018]: time="2025-10-13T00:02:32.974796384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\" id:\"4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf\" pid:3166 exit_status:1 exited_at:{seconds:1760313752 nanos:970885152}" Oct 13 00:02:33.023289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf-rootfs.mount: Deactivated successfully. Oct 13 00:02:33.382038 systemd[1]: cri-containerd-ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2.scope: Deactivated successfully. Oct 13 00:02:33.382680 systemd[1]: cri-containerd-ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2.scope: Consumed 27.561s CPU time, 96M memory peak, 608K read from disk. Oct 13 00:02:33.389940 containerd[2018]: time="2025-10-13T00:02:33.389866066Z" level=info msg="received exit event container_id:\"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\" id:\"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\" pid:3753 exit_status:1 exited_at:{seconds:1760313753 nanos:389310094}" Oct 13 00:02:33.391119 containerd[2018]: time="2025-10-13T00:02:33.391047742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\" id:\"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\" pid:3753 exit_status:1 exited_at:{seconds:1760313753 nanos:389310094}" Oct 13 00:02:33.431697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2-rootfs.mount: Deactivated successfully. Oct 13 00:02:33.450245 kubelet[3330]: I1013 00:02:33.450193 3330 scope.go:117] "RemoveContainer" containerID="4c1854c879a8b7b10a762bd4cffde7c2f32d94d0f5149fbe6e98598b823625bf" Oct 13 00:02:33.455500 containerd[2018]: time="2025-10-13T00:02:33.455446198Z" level=info msg="CreateContainer within sandbox \"2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 13 00:02:33.474596 containerd[2018]: time="2025-10-13T00:02:33.474533326Z" level=info msg="Container a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:02:33.485475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577047147.mount: Deactivated successfully. 
Oct 13 00:02:33.497234 containerd[2018]: time="2025-10-13T00:02:33.497156879Z" level=info msg="CreateContainer within sandbox \"2489205591c9319f8a251b651a377897f9a806c03ddc5ee2938de31ba43ab758\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97\"" Oct 13 00:02:33.498416 containerd[2018]: time="2025-10-13T00:02:33.498353579Z" level=info msg="StartContainer for \"a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97\"" Oct 13 00:02:33.500508 containerd[2018]: time="2025-10-13T00:02:33.500441435Z" level=info msg="connecting to shim a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97" address="unix:///run/containerd/s/f65d4ab65d1fa49a876270bbbe02b40847a64046de6fc1b5208b6c7781a37e67" protocol=ttrpc version=3 Oct 13 00:02:33.547331 systemd[1]: Started cri-containerd-a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97.scope - libcontainer container a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97. Oct 13 00:02:33.640637 containerd[2018]: time="2025-10-13T00:02:33.640165847Z" level=info msg="StartContainer for \"a509ffc2cc9e7ceecb9d585b9b04ee30a6868442f9eb5011c62d2b998d2a7e97\" returns successfully" Oct 13 00:02:34.465830 kubelet[3330]: I1013 00:02:34.465786 3330 scope.go:117] "RemoveContainer" containerID="ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2" Oct 13 00:02:34.471276 containerd[2018]: time="2025-10-13T00:02:34.471197291Z" level=info msg="CreateContainer within sandbox \"5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 13 00:02:34.497171 containerd[2018]: time="2025-10-13T00:02:34.494576388Z" level=info msg="Container 00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:02:34.515836 containerd[2018]: time="2025-10-13T00:02:34.515651844Z" level=info msg="CreateContainer within sandbox \"5d5ab0d91e391939ef33c76c7bef869141be27496571227b39083c7a56968754\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\"" Oct 13 00:02:34.518742 containerd[2018]: time="2025-10-13T00:02:34.516838920Z" level=info msg="StartContainer for \"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\"" Oct 13 00:02:34.521741 containerd[2018]: time="2025-10-13T00:02:34.519578436Z" level=info msg="connecting to shim 00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e" address="unix:///run/containerd/s/377494bad29d4119778f6961854408a7a39173a7d3c1a22c92956992a07f7c7f" protocol=ttrpc version=3 Oct 13 00:02:34.575044 systemd[1]: Started cri-containerd-00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e.scope - libcontainer container 00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e. Oct 13 00:02:34.657027 containerd[2018]: time="2025-10-13T00:02:34.656124336Z" level=info msg="StartContainer for \"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\" returns successfully" Oct 13 00:02:37.233115 systemd[1]: cri-containerd-a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1.scope: Deactivated successfully. Oct 13 00:02:37.233709 systemd[1]: cri-containerd-a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1.scope: Consumed 4.881s CPU time, 21M memory peak. 
Oct 13 00:02:37.241040 containerd[2018]: time="2025-10-13T00:02:37.240683953Z" level=info msg="received exit event container_id:\"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\" id:\"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\" pid:3173 exit_status:1 exited_at:{seconds:1760313757 nanos:239999413}" Oct 13 00:02:37.242236 containerd[2018]: time="2025-10-13T00:02:37.241881025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\" id:\"a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1\" pid:3173 exit_status:1 exited_at:{seconds:1760313757 nanos:239999413}" Oct 13 00:02:37.284896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1-rootfs.mount: Deactivated successfully. Oct 13 00:02:37.500611 kubelet[3330]: I1013 00:02:37.499875 3330 scope.go:117] "RemoveContainer" containerID="a093bbabfe4bad4fb017f6f241cdc599e36e8a45f6c3d5fff6cf3300e6a8fee1" Oct 13 00:02:37.505167 containerd[2018]: time="2025-10-13T00:02:37.505108311Z" level=info msg="CreateContainer within sandbox \"c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 13 00:02:37.527141 containerd[2018]: time="2025-10-13T00:02:37.525069027Z" level=info msg="Container 20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:02:37.535547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840665461.mount: Deactivated successfully. Oct 13 00:02:37.550847 containerd[2018]: time="2025-10-13T00:02:37.550707891Z" level=info msg="CreateContainer within sandbox \"c188c72a2655f63280262fa69b78094eae8732894ddfd63bc90070d571350e75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0\"" Oct 13 00:02:37.552077 containerd[2018]: time="2025-10-13T00:02:37.552007083Z" level=info msg="StartContainer for \"20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0\"" Oct 13 00:02:37.554185 containerd[2018]: time="2025-10-13T00:02:37.554121891Z" level=info msg="connecting to shim 20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0" address="unix:///run/containerd/s/85944eda903bbdb332b8e24efa31961ab743381fdebb515e700ca97415960202" protocol=ttrpc version=3 Oct 13 00:02:37.596023 systemd[1]: Started cri-containerd-20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0.scope - libcontainer container 20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0. 
Oct 13 00:02:37.678166 containerd[2018]: time="2025-10-13T00:02:37.678092655Z" level=info msg="StartContainer for \"20f979819c32450c87e7c9b265ae5076abb520dd8da27ae0c7603021eaabd0f0\" returns successfully" Oct 13 00:02:41.795302 kubelet[3330]: E1013 00:02:41.794545 3330 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-230?timeout=10s\": context deadline exceeded" Oct 13 00:02:45.306876 containerd[2018]: time="2025-10-13T00:02:45.306068409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2e0152fd4f09581ca361feb4b4c6617d2f28d61064087fd42d3a9ea57cd75d\" id:\"b99e5f117ec47f849472ae9a5c801a9a9a69d5bcfcf0cc89e4bc6fbc6cc3d44f\" pid:6609 exited_at:{seconds:1760313765 nanos:305526765}" Oct 13 00:02:46.102311 systemd[1]: cri-containerd-00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e.scope: Deactivated successfully. Oct 13 00:02:46.104867 containerd[2018]: time="2025-10-13T00:02:46.103135845Z" level=info msg="received exit event container_id:\"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\" id:\"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\" pid:6525 exit_status:1 exited_at:{seconds:1760313766 nanos:102450669}" Oct 13 00:02:46.104867 containerd[2018]: time="2025-10-13T00:02:46.103580673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\" id:\"00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e\" pid:6525 exit_status:1 exited_at:{seconds:1760313766 nanos:102450669}" Oct 13 00:02:46.143388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e-rootfs.mount: Deactivated successfully. 
Oct 13 00:02:46.548986 kubelet[3330]: I1013 00:02:46.548641 3330 scope.go:117] "RemoveContainer" containerID="ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2" Oct 13 00:02:46.548986 kubelet[3330]: I1013 00:02:46.548942 3330 scope.go:117] "RemoveContainer" containerID="00cbddaf70ae7661cc313fb2541fde99460f730c14307c0e3bfb361968fece8e" Oct 13 00:02:46.550119 kubelet[3330]: E1013 00:02:46.549978 3330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-db78d5bd4-cms8q_tigera-operator(c33a75ad-7417-4481-80b7-72f12b1520d3)\"" pod="tigera-operator/tigera-operator-db78d5bd4-cms8q" podUID="c33a75ad-7417-4481-80b7-72f12b1520d3" Oct 13 00:02:46.552820 containerd[2018]: time="2025-10-13T00:02:46.552774779Z" level=info msg="RemoveContainer for \"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\"" Oct 13 00:02:46.565559 containerd[2018]: time="2025-10-13T00:02:46.565374444Z" level=info msg="RemoveContainer for \"ab2c5577f2d4304a81466e5f5b3a4bc058e664898a02811bb46504f7c12b1fa2\" returns successfully" Oct 13 00:02:51.085101 containerd[2018]: time="2025-10-13T00:02:51.085035266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738233da189d0fb743c3d56fb029c42f4d26bd97d9eb143a676849c52e6f5e66\" id:\"1402529ba7978962477256f64498110f1ddd800d41bfec04d1794a826b4f1b16\" pid:6647 exited_at:{seconds:1760313771 nanos:84058058}" Oct 13 00:02:51.795394 kubelet[3330]: E1013 00:02:51.795287 3330 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"