Sep 12 17:05:51.128328 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 17:05:51.128390 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025
Sep 12 17:05:51.128419 kernel: KASLR disabled due to lack of seed
Sep 12 17:05:51.128436 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:05:51.128451 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Sep 12 17:05:51.128466 kernel: secureboot: Secure boot disabled
Sep 12 17:05:51.128484 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:05:51.128498 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 17:05:51.128514 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:05:51.128530 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:05:51.128545 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 17:05:51.128565 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 12 17:05:51.128580 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:05:51.128595 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 17:05:51.128613 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 17:05:51.128628 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 17:05:51.128649 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:05:51.128665 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 17:05:51.128681 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 17:05:51.128697 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 17:05:51.128713 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 17:05:51.128729 kernel: printk: legacy bootconsole [uart0] enabled
Sep 12 17:05:51.128745 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 17:05:51.128761 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:05:51.128777 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 12 17:05:51.128793 kernel: Zone ranges:
Sep 12 17:05:51.128809 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 17:05:51.128829 kernel: DMA32 empty
Sep 12 17:05:51.128845 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 17:05:51.128861 kernel: Device empty
Sep 12 17:05:51.128877 kernel: Movable zone start for each node
Sep 12 17:05:51.128892 kernel: Early memory node ranges
Sep 12 17:05:51.128908 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 17:05:51.128924 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 17:05:51.128939 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 17:05:51.128955 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 17:05:51.128971 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 17:05:51.128987 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 17:05:51.129003 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 17:05:51.129024 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 17:05:51.129047 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:05:51.129063 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 17:05:51.129081 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 12 17:05:51.129097 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:05:51.129118 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 17:05:51.129135 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:05:51.129151 kernel: psci: Trusted OS migration not required
Sep 12 17:05:51.129167 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:05:51.129184 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 17:05:51.129200 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 17:05:51.129217 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 17:05:51.129234 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:05:51.129251 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:05:51.129267 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:05:51.129284 kernel: CPU features: detected: Spectre-v2
Sep 12 17:05:51.129334 kernel: CPU features: detected: Spectre-v3a
Sep 12 17:05:51.129352 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:05:51.129369 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 17:05:51.129386 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 17:05:51.129403 kernel: alternatives: applying boot alternatives
Sep 12 17:05:51.129421 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:05:51.129439 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:05:51.129456 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:05:51.129473 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:05:51.129489 kernel: Fallback order for Node 0: 0
Sep 12 17:05:51.129510 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 12 17:05:51.129527 kernel: Policy zone: Normal
Sep 12 17:05:51.129544 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:05:51.129560 kernel: software IO TLB: area num 2.
Sep 12 17:05:51.129577 kernel: software IO TLB: mapped [mem 0x000000006c600000-0x0000000070600000] (64MB)
Sep 12 17:05:51.129594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:05:51.129610 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:05:51.129628 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:05:51.129645 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:05:51.129662 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:05:51.129680 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:05:51.129697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:05:51.129718 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:05:51.129735 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:05:51.129752 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:05:51.129770 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:05:51.129787 kernel: GICv3: 96 SPIs implemented
Sep 12 17:05:51.129804 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:05:51.129820 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:05:51.129837 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 17:05:51.129853 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 17:05:51.129870 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 17:05:51.129886 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 17:05:51.129903 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:05:51.129924 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:05:51.129941 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 12 17:05:51.129957 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 17:05:51.129973 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 12 17:05:51.129990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:05:51.130006 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 17:05:51.130023 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 17:05:51.130040 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 17:05:51.130056 kernel: Console: colour dummy device 80x25
Sep 12 17:05:51.130074 kernel: printk: legacy console [tty1] enabled
Sep 12 17:05:51.130090 kernel: ACPI: Core revision 20240827
Sep 12 17:05:51.130112 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 17:05:51.130129 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:05:51.130146 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:05:51.130163 kernel: landlock: Up and running.
Sep 12 17:05:51.130179 kernel: SELinux: Initializing.
Sep 12 17:05:51.130197 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:05:51.130214 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:05:51.130230 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:05:51.130247 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:05:51.130269 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:05:51.130286 kernel: Remapping and enabling EFI services.
Sep 12 17:05:51.130353 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:05:51.130371 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:05:51.130389 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 17:05:51.130406 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 12 17:05:51.130424 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 17:05:51.130441 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:05:51.130458 kernel: SMP: Total of 2 processors activated.
Sep 12 17:05:51.130490 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:05:51.130508 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:05:51.130529 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 17:05:51.130547 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:05:51.130565 kernel: alternatives: applying system-wide alternatives
Sep 12 17:05:51.130583 kernel: Memory: 3797096K/4030464K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 212024K reserved, 16384K cma-reserved)
Sep 12 17:05:51.130601 kernel: devtmpfs: initialized
Sep 12 17:05:51.130623 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:05:51.130641 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:05:51.130659 kernel: 17056 pages in range for non-PLT usage
Sep 12 17:05:51.130677 kernel: 508576 pages in range for PLT usage
Sep 12 17:05:51.130695 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:05:51.130712 kernel: SMBIOS 3.0.0 present.
Sep 12 17:05:51.130730 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 17:05:51.130748 kernel: DMI: Memory slots populated: 0/0
Sep 12 17:05:51.130766 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:05:51.130787 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:05:51.130806 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:05:51.130824 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:05:51.130841 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:05:51.130859 kernel: audit: type=2000 audit(0.259:1): state=initialized audit_enabled=0 res=1
Sep 12 17:05:51.130877 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:05:51.130894 kernel: cpuidle: using governor menu
Sep 12 17:05:51.130912 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:05:51.130929 kernel: ASID allocator initialised with 65536 entries
Sep 12 17:05:51.130951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:05:51.130969 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:05:51.130987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:05:51.131005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:05:51.131022 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:05:51.131041 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:05:51.131058 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:05:51.131076 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:05:51.131094 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:05:51.131116 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:05:51.131133 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:05:51.131151 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:05:51.131168 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:05:51.131186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:05:51.131204 kernel: ACPI: Interpreter enabled
Sep 12 17:05:51.131221 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:05:51.131239 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:05:51.131257 kernel: ACPI: CPU0 has been hot-added
Sep 12 17:05:51.131279 kernel: ACPI: CPU1 has been hot-added
Sep 12 17:05:51.131323 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 17:05:51.131643 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:05:51.131831 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:05:51.132012 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:05:51.132192 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 17:05:51.132424 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 17:05:51.132456 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 17:05:51.132475 kernel: acpiphp: Slot [1] registered
Sep 12 17:05:51.132493 kernel: acpiphp: Slot [2] registered
Sep 12 17:05:51.132511 kernel: acpiphp: Slot [3] registered
Sep 12 17:05:51.132529 kernel: acpiphp: Slot [4] registered
Sep 12 17:05:51.132546 kernel: acpiphp: Slot [5] registered
Sep 12 17:05:51.132564 kernel: acpiphp: Slot [6] registered
Sep 12 17:05:51.132581 kernel: acpiphp: Slot [7] registered
Sep 12 17:05:51.132599 kernel: acpiphp: Slot [8] registered
Sep 12 17:05:51.132617 kernel: acpiphp: Slot [9] registered
Sep 12 17:05:51.132639 kernel: acpiphp: Slot [10] registered
Sep 12 17:05:51.132657 kernel: acpiphp: Slot [11] registered
Sep 12 17:05:51.132675 kernel: acpiphp: Slot [12] registered
Sep 12 17:05:51.132692 kernel: acpiphp: Slot [13] registered
Sep 12 17:05:51.132709 kernel: acpiphp: Slot [14] registered
Sep 12 17:05:51.132727 kernel: acpiphp: Slot [15] registered
Sep 12 17:05:51.132745 kernel: acpiphp: Slot [16] registered
Sep 12 17:05:51.132762 kernel: acpiphp: Slot [17] registered
Sep 12 17:05:51.132780 kernel: acpiphp: Slot [18] registered
Sep 12 17:05:51.132801 kernel: acpiphp: Slot [19] registered
Sep 12 17:05:51.132819 kernel: acpiphp: Slot [20] registered
Sep 12 17:05:51.132836 kernel: acpiphp: Slot [21] registered
Sep 12 17:05:51.132854 kernel: acpiphp: Slot [22] registered
Sep 12 17:05:51.132871 kernel: acpiphp: Slot [23] registered
Sep 12 17:05:51.132889 kernel: acpiphp: Slot [24] registered
Sep 12 17:05:51.132906 kernel: acpiphp: Slot [25] registered
Sep 12 17:05:51.132924 kernel: acpiphp: Slot [26] registered
Sep 12 17:05:51.132941 kernel: acpiphp: Slot [27] registered
Sep 12 17:05:51.132959 kernel: acpiphp: Slot [28] registered
Sep 12 17:05:51.132981 kernel: acpiphp: Slot [29] registered
Sep 12 17:05:51.132999 kernel: acpiphp: Slot [30] registered
Sep 12 17:05:51.133017 kernel: acpiphp: Slot [31] registered
Sep 12 17:05:51.133035 kernel: PCI host bridge to bus 0000:00
Sep 12 17:05:51.133227 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 17:05:51.133446 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:05:51.133635 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:05:51.133817 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 17:05:51.134068 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:05:51.135398 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 12 17:05:51.135702 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 12 17:05:51.135919 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 12 17:05:51.136115 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 12 17:05:51.137117 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:05:51.137445 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 12 17:05:51.137652 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 12 17:05:51.137844 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 12 17:05:51.138033 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 12 17:05:51.138219 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:05:51.144639 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 12 17:05:51.144846 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 12 17:05:51.145044 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 12 17:05:51.145228 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 12 17:05:51.145846 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 12 17:05:51.146036 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 17:05:51.146202 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:05:51.146410 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:05:51.146438 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:05:51.146466 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:05:51.146485 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:05:51.146503 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:05:51.146522 kernel: iommu: Default domain type: Translated
Sep 12 17:05:51.146542 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:05:51.146560 kernel: efivars: Registered efivars operations
Sep 12 17:05:51.146578 kernel: vgaarb: loaded
Sep 12 17:05:51.146596 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:05:51.146613 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:05:51.146636 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:05:51.146654 kernel: pnp: PnP ACPI init
Sep 12 17:05:51.146874 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 17:05:51.146903 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:05:51.146922 kernel: NET: Registered PF_INET protocol family
Sep 12 17:05:51.146940 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:05:51.146959 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:05:51.146976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:05:51.147000 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:05:51.147018 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:05:51.147036 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:05:51.147054 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:05:51.147072 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:05:51.147090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:05:51.147107 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:05:51.147125 kernel: kvm [1]: HYP mode not available
Sep 12 17:05:51.147142 kernel: Initialise system trusted keyrings
Sep 12 17:05:51.147165 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:05:51.147183 kernel: Key type asymmetric registered
Sep 12 17:05:51.147200 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:05:51.147218 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 17:05:51.147235 kernel: io scheduler mq-deadline registered
Sep 12 17:05:51.147253 kernel: io scheduler kyber registered
Sep 12 17:05:51.147272 kernel: io scheduler bfq registered
Sep 12 17:05:51.147497 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 17:05:51.147531 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:05:51.147550 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:05:51.147568 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 17:05:51.147587 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 17:05:51.147605 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:05:51.147624 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 17:05:51.147811 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 17:05:51.147836 kernel: printk: legacy console [ttyS0] disabled
Sep 12 17:05:51.147855 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 17:05:51.147878 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:05:51.147896 kernel: printk: legacy bootconsole [uart0] disabled
Sep 12 17:05:51.147914 kernel: thunder_xcv, ver 1.0
Sep 12 17:05:51.147932 kernel: thunder_bgx, ver 1.0
Sep 12 17:05:51.147949 kernel: nicpf, ver 1.0
Sep 12 17:05:51.147967 kernel: nicvf, ver 1.0
Sep 12 17:05:51.148174 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:05:51.148437 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:05:50 UTC (1757696750)
Sep 12 17:05:51.148478 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:05:51.148497 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 12 17:05:51.148517 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:05:51.148536 kernel: watchdog: NMI not fully supported
Sep 12 17:05:51.148554 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:05:51.148572 kernel: Segment Routing with IPv6
Sep 12 17:05:51.148590 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:05:51.148608 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:05:51.148626 kernel: Key type dns_resolver registered
Sep 12 17:05:51.148649 kernel: registered taskstats version 1
Sep 12 17:05:51.148667 kernel: Loading compiled-in X.509 certificates
Sep 12 17:05:51.148685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7'
Sep 12 17:05:51.148702 kernel: Demotion targets for Node 0: null
Sep 12 17:05:51.148720 kernel: Key type .fscrypt registered
Sep 12 17:05:51.148738 kernel: Key type fscrypt-provisioning registered
Sep 12 17:05:51.148755 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:05:51.148773 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:05:51.148791 kernel: ima: No architecture policies found
Sep 12 17:05:51.148813 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:05:51.148831 kernel: clk: Disabling unused clocks
Sep 12 17:05:51.148849 kernel: PM: genpd: Disabling unused power domains
Sep 12 17:05:51.148866 kernel: Warning: unable to open an initial console.
Sep 12 17:05:51.148884 kernel: Freeing unused kernel memory: 38912K
Sep 12 17:05:51.148902 kernel: Run /init as init process
Sep 12 17:05:51.148919 kernel: with arguments:
Sep 12 17:05:51.148937 kernel: /init
Sep 12 17:05:51.148954 kernel: with environment:
Sep 12 17:05:51.148971 kernel: HOME=/
Sep 12 17:05:51.148993 kernel: TERM=linux
Sep 12 17:05:51.149011 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:05:51.149031 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:05:51.149056 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:05:51.149077 systemd[1]: Detected virtualization amazon.
Sep 12 17:05:51.149095 systemd[1]: Detected architecture arm64.
Sep 12 17:05:51.149114 systemd[1]: Running in initrd.
Sep 12 17:05:51.149138 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:05:51.149159 systemd[1]: Hostname set to .
Sep 12 17:05:51.149178 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:05:51.149198 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:05:51.149218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:05:51.149237 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:05:51.149258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:05:51.149278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:05:51.149388 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:05:51.149411 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:05:51.149434 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:05:51.149454 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:05:51.149474 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:05:51.149494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:05:51.149513 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:05:51.149541 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:05:51.149561 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:05:51.149581 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:05:51.149600 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:05:51.149621 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:05:51.149641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:05:51.149660 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:05:51.149680 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:05:51.149705 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:05:51.149725 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:05:51.149745 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:05:51.149765 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:05:51.149784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:05:51.149804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:05:51.149824 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:05:51.149844 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:05:51.149864 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:05:51.149888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:05:51.149908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:51.149928 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:05:51.149948 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:05:51.149973 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:05:51.149994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:05:51.150013 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:05:51.150077 systemd-journald[258]: Collecting audit messages is disabled.
Sep 12 17:05:51.150123 kernel: Bridge firewalling registered
Sep 12 17:05:51.150159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:05:51.150180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:05:51.150200 systemd-journald[258]: Journal started
Sep 12 17:05:51.150241 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2198f43284ab2607cf9651ab83a696) is 8M, max 75.3M, 67.3M free.
Sep 12 17:05:51.079813 systemd-modules-load[259]: Inserted module 'overlay'
Sep 12 17:05:51.126485 systemd-modules-load[259]: Inserted module 'br_netfilter'
Sep 12 17:05:51.165888 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:05:51.166863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:51.173868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:05:51.186060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:05:51.201753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:05:51.221689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:05:51.237999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:05:51.251576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:05:51.268670 systemd-tmpfiles[286]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:05:51.277726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:05:51.287901 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:05:51.295278 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:05:51.311551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:05:51.376897 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:05:51.431549 systemd-resolved[300]: Positive Trust Anchors:
Sep 12 17:05:51.432111 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:05:51.432178 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:05:51.572332 kernel: SCSI subsystem initialized
Sep 12 17:05:51.580321 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:05:51.593598 kernel: iscsi: registered transport (tcp)
Sep 12 17:05:51.616327 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:05:51.616414 kernel: QLogic iSCSI HBA Driver
Sep 12 17:05:51.655498 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:05:51.691105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:05:51.703394 kernel: random: crng init done
Sep 12 17:05:51.704128 systemd-resolved[300]: Defaulting to hostname 'linux'.
Sep 12 17:05:51.708611 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:05:51.716753 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:05:51.726057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:05:51.819970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:05:51.833014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:05:51.928403 kernel: raid6: neonx8 gen() 6301 MB/s
Sep 12 17:05:51.945360 kernel: raid6: neonx4 gen() 6303 MB/s
Sep 12 17:05:51.963346 kernel: raid6: neonx2 gen() 5305 MB/s
Sep 12 17:05:51.981383 kernel: raid6: neonx1 gen() 3854 MB/s
Sep 12 17:05:51.998358 kernel: raid6: int64x8 gen() 3577 MB/s
Sep 12 17:05:52.016387 kernel: raid6: int64x4 gen() 3590 MB/s
Sep 12 17:05:52.034371 kernel: raid6: int64x2 gen() 3455 MB/s
Sep 12 17:05:52.052354 kernel: raid6: int64x1 gen() 2734 MB/s
Sep 12 17:05:52.052441 kernel: raid6: using algorithm neonx4 gen() 6303 MB/s
Sep 12 17:05:52.071353 kernel: raid6: .... xor() 4863 MB/s, rmw enabled
Sep 12 17:05:52.071434 kernel: raid6: using neon recovery algorithm
Sep 12 17:05:52.080838 kernel: xor: measuring software checksum speed
Sep 12 17:05:52.080918 kernel: 8regs : 12954 MB/sec
Sep 12 17:05:52.082018 kernel: 32regs : 12946 MB/sec
Sep 12 17:05:52.084389 kernel: arm64_neon : 8573 MB/sec
Sep 12 17:05:52.084460 kernel: xor: using function: 8regs (12954 MB/sec)
Sep 12 17:05:52.179370 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:05:52.191948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:05:52.200575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:05:52.244272 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Sep 12 17:05:52.255068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:05:52.275562 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:05:52.321777 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Sep 12 17:05:52.381460 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:05:52.391420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:05:52.528870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:05:52.538129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:05:52.697382 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:05:52.697447 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 17:05:52.714677 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 17:05:52.714742 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:05:52.725331 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:05:52.725697 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:05:52.734333 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:05:52.744540 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:53:61:2b:a0:29
Sep 12 17:05:52.744858 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:05:52.747723 kernel: GPT:9289727 != 16777215
Sep 12 17:05:52.747774 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:05:52.749125 kernel: GPT:9289727 != 16777215
Sep 12 17:05:52.750664 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:05:52.752456 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:05:52.757729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:05:52.760839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:52.764952 (udev-worker)[552]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:05:52.765772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:52.785727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:52.793803 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:05:52.824423 kernel: nvme nvme0: using unchecked data buffer
Sep 12 17:05:52.852388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:52.987486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:05:53.075227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:05:53.081360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:05:53.115716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:05:53.141078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:05:53.148725 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:05:53.160101 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:05:53.163801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:05:53.172145 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:05:53.180862 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:05:53.190015 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:05:53.241052 disk-uuid[686]: Primary Header is updated.
Sep 12 17:05:53.241052 disk-uuid[686]: Secondary Entries is updated.
Sep 12 17:05:53.241052 disk-uuid[686]: Secondary Header is updated.
Sep 12 17:05:53.258325 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:05:53.266906 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:05:54.286352 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:05:54.288677 disk-uuid[689]: The operation has completed successfully.
Sep 12 17:05:54.468451 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:05:54.468672 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:05:54.569398 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:05:54.609987 sh[954]: Success
Sep 12 17:05:54.644231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:05:54.644322 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:05:54.646284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:05:54.660333 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 17:05:54.779273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:05:54.787482 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:05:54.808399 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:05:54.832332 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (989)
Sep 12 17:05:54.832435 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194
Sep 12 17:05:54.835593 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:05:55.010202 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:05:55.010276 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:05:55.010325 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:05:55.036682 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:05:55.037508 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:05:55.047245 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:05:55.053231 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:05:55.060544 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:05:55.111334 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1019)
Sep 12 17:05:55.116014 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:05:55.116102 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:05:55.135481 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:05:55.135555 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:05:55.144399 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:05:55.148080 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:05:55.157932 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:05:55.269489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:05:55.283944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:05:55.365455 systemd-networkd[1158]: lo: Link UP
Sep 12 17:05:55.365468 systemd-networkd[1158]: lo: Gained carrier
Sep 12 17:05:55.369909 systemd-networkd[1158]: Enumeration completed
Sep 12 17:05:55.370561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:05:55.370812 systemd-networkd[1158]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:05:55.370819 systemd-networkd[1158]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:05:55.381898 systemd-networkd[1158]: eth0: Link UP
Sep 12 17:05:55.381917 systemd-networkd[1158]: eth0: Gained carrier
Sep 12 17:05:55.381939 systemd-networkd[1158]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:05:55.390818 systemd[1]: Reached target network.target - Network.
Sep 12 17:05:55.410535 systemd-networkd[1158]: eth0: DHCPv4 address 172.31.16.146/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:05:55.769658 ignition[1078]: Ignition 2.21.0
Sep 12 17:05:55.769680 ignition[1078]: Stage: fetch-offline
Sep 12 17:05:55.771218 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:55.771245 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:55.772257 ignition[1078]: Ignition finished successfully
Sep 12 17:05:55.784020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:05:55.788928 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:05:55.828990 ignition[1170]: Ignition 2.21.0
Sep 12 17:05:55.829593 ignition[1170]: Stage: fetch
Sep 12 17:05:55.830191 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:55.830215 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:55.831173 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:55.852422 ignition[1170]: PUT result: OK
Sep 12 17:05:55.857841 ignition[1170]: parsed url from cmdline: ""
Sep 12 17:05:55.857881 ignition[1170]: no config URL provided
Sep 12 17:05:55.857919 ignition[1170]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:05:55.857962 ignition[1170]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:05:55.858062 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:55.869936 ignition[1170]: PUT result: OK
Sep 12 17:05:55.870348 ignition[1170]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:05:55.880810 ignition[1170]: GET result: OK
Sep 12 17:05:55.883029 ignition[1170]: parsing config with SHA512: 0dca5d5073e3d0ec3834eec13ddcea71c3a5521433598ab81ff912460071b0e5ea10b5502cca267fb1c872e8b385c6a705f80a7c3a76449185dad378a18765ba
Sep 12 17:05:55.899918 unknown[1170]: fetched base config from "system"
Sep 12 17:05:55.900532 unknown[1170]: fetched base config from "system"
Sep 12 17:05:55.903226 ignition[1170]: fetch: fetch complete
Sep 12 17:05:55.900706 unknown[1170]: fetched user config from "aws"
Sep 12 17:05:55.903240 ignition[1170]: fetch: fetch passed
Sep 12 17:05:55.903958 ignition[1170]: Ignition finished successfully
Sep 12 17:05:55.917382 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:05:55.924745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:05:55.981594 ignition[1177]: Ignition 2.21.0
Sep 12 17:05:55.981631 ignition[1177]: Stage: kargs
Sep 12 17:05:55.982199 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:55.982230 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:55.982435 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:55.986364 ignition[1177]: PUT result: OK
Sep 12 17:05:55.993170 ignition[1177]: kargs: kargs passed
Sep 12 17:05:55.993313 ignition[1177]: Ignition finished successfully
Sep 12 17:05:56.008396 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:05:56.013512 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:05:56.066241 ignition[1184]: Ignition 2.21.0
Sep 12 17:05:56.066276 ignition[1184]: Stage: disks
Sep 12 17:05:56.066800 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:56.066824 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:56.067906 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:56.072766 ignition[1184]: PUT result: OK
Sep 12 17:05:56.082647 ignition[1184]: disks: disks passed
Sep 12 17:05:56.082770 ignition[1184]: Ignition finished successfully
Sep 12 17:05:56.090379 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:05:56.094411 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:05:56.098752 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:05:56.107536 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:05:56.107824 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:05:56.117460 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:05:56.124744 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:05:56.184813 systemd-fsck[1193]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 17:05:56.189650 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:05:56.199136 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:05:56.337356 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none.
Sep 12 17:05:56.339990 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:05:56.345218 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:05:56.353771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:05:56.372683 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:05:56.373445 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:05:56.373536 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:05:56.373586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:05:56.411415 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1212)
Sep 12 17:05:56.411488 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:05:56.411534 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:05:56.424029 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:05:56.424121 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:05:56.426435 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:05:56.429997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:05:56.439152 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:05:56.875034 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:05:56.895599 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:05:56.904970 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:05:56.914535 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:05:57.320062 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:05:57.326467 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:05:57.338690 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:05:57.364841 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:05:57.368122 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:05:57.401544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:05:57.408422 systemd-networkd[1158]: eth0: Gained IPv6LL
Sep 12 17:05:57.423562 ignition[1330]: INFO : Ignition 2.21.0
Sep 12 17:05:57.423562 ignition[1330]: INFO : Stage: mount
Sep 12 17:05:57.428275 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:57.428275 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:57.428275 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:57.437408 ignition[1330]: INFO : PUT result: OK
Sep 12 17:05:57.446547 ignition[1330]: INFO : mount: mount passed
Sep 12 17:05:57.446547 ignition[1330]: INFO : Ignition finished successfully
Sep 12 17:05:57.450043 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:05:57.457592 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:05:57.502616 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:05:57.547371 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1342)
Sep 12 17:05:57.551953 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:05:57.552179 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:05:57.561466 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:05:57.561595 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:05:57.565009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:05:57.626366 ignition[1359]: INFO : Ignition 2.21.0
Sep 12 17:05:57.626366 ignition[1359]: INFO : Stage: files
Sep 12 17:05:57.626366 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:57.626366 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:57.640811 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:57.640811 ignition[1359]: INFO : PUT result: OK
Sep 12 17:05:57.649018 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:05:57.656834 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:05:57.656834 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:05:57.695593 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:05:57.699713 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:05:57.703987 unknown[1359]: wrote ssh authorized keys file for user: core
Sep 12 17:05:57.707331 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:05:57.714759 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:05:57.719822 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 17:05:57.820855 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:05:58.306553 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:05:58.311940 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:05:58.317183 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:05:58.321637 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:05:58.326407 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:05:58.330999 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:05:58.335644 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:05:58.340206 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:05:58.344980 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:05:58.354324 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:05:58.358933 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:05:58.363473 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:05:58.369684 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:05:58.369684 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:05:58.381682 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 17:05:58.873009 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:05:59.254700 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:05:59.254700 ignition[1359]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:05:59.267184 ignition[1359]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:05:59.272786 ignition[1359]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:05:59.272786 ignition[1359]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:05:59.272786 ignition[1359]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:05:59.272786 ignition[1359]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:05:59.289770 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:05:59.289770 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:05:59.289770 ignition[1359]: INFO : files: files passed
Sep 12 17:05:59.289770 ignition[1359]: INFO : Ignition finished successfully
Sep 12 17:05:59.305142 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:05:59.309928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:05:59.329718 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:05:59.345661 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:05:59.345880 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:05:59.362338 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:05:59.362338 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:05:59.370945 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:05:59.378802 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:05:59.386223 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:05:59.394533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:05:59.479384 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:05:59.481410 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:05:59.489735 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:05:59.495514 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:05:59.498743 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:05:59.501841 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:05:59.562405 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:05:59.570285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:05:59.605064 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:05:59.612914 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:05:59.617626 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:05:59.625550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:05:59.625822 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:05:59.647491 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:05:59.654845 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:05:59.665181 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:05:59.668890 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:05:59.679683 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:05:59.684221 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:05:59.689892 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:05:59.699145 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:05:59.703326 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:05:59.712955 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:05:59.716875 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:05:59.724656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:05:59.725253 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:05:59.735161 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:05:59.738930 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:05:59.745463 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:05:59.751064 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:05:59.755173 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:05:59.755492 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:05:59.766057 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:05:59.766916 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:05:59.772535 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:05:59.772890 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:05:59.788657 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:05:59.794576 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:05:59.799890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:05:59.804533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:05:59.812177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:05:59.813160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:05:59.837142 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:05:59.837469 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:05:59.871097 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:05:59.877779 ignition[1413]: INFO : Ignition 2.21.0
Sep 12 17:05:59.880852 ignition[1413]: INFO : Stage: umount
Sep 12 17:05:59.883360 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:59.883360 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:05:59.883360 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:05:59.894574 ignition[1413]: INFO : PUT result: OK
Sep 12 17:05:59.906679 ignition[1413]: INFO : umount: umount passed
Sep 12 17:05:59.909347 ignition[1413]: INFO : Ignition finished successfully
Sep 12 17:05:59.914917 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:05:59.915162 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:05:59.918541 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:05:59.918645 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:05:59.928738 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:05:59.928835 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:05:59.932039 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:05:59.932122 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:05:59.943078 systemd[1]: Stopped target network.target - Network.
Sep 12 17:05:59.947616 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:05:59.947744 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:05:59.948493 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:05:59.948911 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:05:59.978218 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:05:59.986252 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:05:59.991425 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:05:59.997799 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:05:59.997957 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:06:00.005683 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:06:00.005850 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:06:00.015232 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:06:00.015397 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:06:00.019160 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:06:00.019262 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:06:00.029247 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:06:00.033180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:06:00.060118 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:06:00.060732 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:06:00.069108 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:06:00.069738 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:06:00.071407 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:06:00.085146 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:06:00.087006 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:06:00.098923 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:06:00.099015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:06:00.104617 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:06:00.117796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:06:00.117946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:06:00.121546 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:06:00.121679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:06:00.139830 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:06:00.139943 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:06:00.146036 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:06:00.146151 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:06:00.161861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:06:00.174736 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:06:00.175469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:06:00.177379 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:06:00.181886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:06:00.196986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:06:00.197182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:06:00.216725 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:06:00.218416 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:06:00.225738 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:06:00.225832 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:06:00.240449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:06:00.240538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:06:00.244173 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:06:00.244282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:06:00.261375 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:06:00.261517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:06:00.265125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:06:00.265229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:06:00.268076 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:06:00.291445 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:06:00.291599 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:06:00.298833 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:06:00.298950 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:06:00.314875 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:06:00.314997 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:06:00.328192 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:06:00.328388 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:06:00.332616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:06:00.332735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:06:00.339344 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:06:00.339493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 17:06:00.339590 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:06:00.339691 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:06:00.343511 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:06:00.346370 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:06:00.364050 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:06:00.364519 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:06:00.402843 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:06:00.411201 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:06:00.444850 systemd[1]: Switching root.
Sep 12 17:06:00.507655 systemd-journald[258]: Journal stopped
Sep 12 17:06:03.216153 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:06:03.218458 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:06:03.218537 kernel: SELinux: policy capability open_perms=1
Sep 12 17:06:03.218569 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:06:03.218600 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:06:03.218640 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:06:03.218675 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:06:03.218706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:06:03.218740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:06:03.218771 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:06:03.218804 kernel: audit: type=1403 audit(1757696760.979:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:06:03.218845 systemd[1]: Successfully loaded SELinux policy in 106.946ms.
Sep 12 17:06:03.218909 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.226ms.
Sep 12 17:06:03.218952 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:06:03.218991 systemd[1]: Detected virtualization amazon.
Sep 12 17:06:03.219026 systemd[1]: Detected architecture arm64.
Sep 12 17:06:03.219057 systemd[1]: Detected first boot.
Sep 12 17:06:03.219090 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:06:03.219123 zram_generator::config[1456]: No configuration found.
Sep 12 17:06:03.219160 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:06:03.219193 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:06:03.219229 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:06:03.219269 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:06:03.219346 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:06:03.219382 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:06:03.219415 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:06:03.219448 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:06:03.219477 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:06:03.219509 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:06:03.219542 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:06:03.219583 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:06:03.219616 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:06:03.219647 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:06:03.219678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:06:03.219710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:06:03.219739 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:06:03.219770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:06:03.219799 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:06:03.219830 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:06:03.219863 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:06:03.219893 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:06:03.219923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:06:03.219952 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:06:03.219994 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:06:03.220032 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:06:03.220060 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:06:03.220089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:06:03.220125 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:06:03.220157 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:06:03.220189 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:06:03.220217 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:06:03.220246 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:06:03.222361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:06:03.222455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:06:03.222488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:06:03.222522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:06:03.222564 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:06:03.222596 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:06:03.222626 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:06:03.222658 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:06:03.222687 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:06:03.222716 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:06:03.222745 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:06:03.222777 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:06:03.222807 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:06:03.222844 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:06:03.222879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:06:03.222921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:06:03.222953 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:06:03.222985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:06:03.223014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:06:03.223043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:06:03.223072 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:06:03.223108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:06:03.223138 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:06:03.223169 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:06:03.223211 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:06:03.223241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:06:03.223273 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:06:03.223344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:06:03.223380 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:06:03.223409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:06:03.223448 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:06:03.223477 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:06:03.223507 kernel: fuse: init (API version 7.41)
Sep 12 17:06:03.223538 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:06:03.223568 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:06:03.223606 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:06:03.223643 systemd[1]: Stopped verity-setup.service.
Sep 12 17:06:03.223673 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:06:03.223702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:06:03.223731 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:06:03.223765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:06:03.223794 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:06:03.223827 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:06:03.223857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:06:03.223890 kernel: ACPI: bus type drm_connector registered
Sep 12 17:06:03.223918 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:06:03.223947 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:06:03.223979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:06:03.224012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:06:03.224048 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:06:03.224078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:06:03.224108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:06:03.224137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:06:03.224166 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:06:03.224196 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:06:03.224224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:06:03.224258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:06:03.230442 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:06:03.230505 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:06:03.230537 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:06:03.230566 kernel: loop: module loaded
Sep 12 17:06:03.230596 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:06:03.230627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:06:03.230679 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:06:03.230725 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:06:03.230789 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:06:03.230850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:06:03.230925 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:06:03.230990 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:06:03.231049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:06:03.231192 systemd-journald[1538]: Collecting audit messages is disabled.
Sep 12 17:06:03.231378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:06:03.231447 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:06:03.231499 systemd-journald[1538]: Journal started
Sep 12 17:06:03.231579 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec2198f43284ab2607cf9651ab83a696) is 8M, max 75.3M, 67.3M free.
Sep 12 17:06:02.366631 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:06:02.381261 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:06:02.382107 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:06:03.252366 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:06:03.252469 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:06:03.263606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:06:03.270812 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:06:03.271251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:06:03.275255 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:06:03.282974 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:06:03.287776 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:06:03.321350 kernel: loop0: detected capacity change from 0 to 211168
Sep 12 17:06:03.359811 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:06:03.363573 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:06:03.366465 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:06:03.371902 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:06:03.379873 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:06:03.398533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:06:03.472750 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Sep 12 17:06:03.472809 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Sep 12 17:06:03.476056 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:06:03.481928 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:06:03.499423 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:06:03.505475 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec2198f43284ab2607cf9651ab83a696 is 113.906ms for 944 entries.
Sep 12 17:06:03.505475 systemd-journald[1538]: System Journal (/var/log/journal/ec2198f43284ab2607cf9651ab83a696) is 8M, max 195.6M, 187.6M free.
Sep 12 17:06:03.636731 systemd-journald[1538]: Received client request to flush runtime journal.
Sep 12 17:06:03.636807 kernel: loop1: detected capacity change from 0 to 119320
Sep 12 17:06:03.529177 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:06:03.535921 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:06:03.643703 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:06:03.690329 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:06:03.705164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:06:03.737466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:06:03.753452 kernel: loop2: detected capacity change from 0 to 61256
Sep 12 17:06:03.792042 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Sep 12 17:06:03.792567 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Sep 12 17:06:03.803622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:06:03.894439 kernel: loop3: detected capacity change from 0 to 100608
Sep 12 17:06:04.025274 kernel: loop4: detected capacity change from 0 to 211168
Sep 12 17:06:04.056354 kernel: loop5: detected capacity change from 0 to 119320
Sep 12 17:06:04.081879 kernel: loop6: detected capacity change from 0 to 61256
Sep 12 17:06:04.105333 kernel: loop7: detected capacity change from 0 to 100608
Sep 12 17:06:04.124220 (sd-merge)[1617]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:06:04.126033 (sd-merge)[1617]: Merged extensions into '/usr'.
Sep 12 17:06:04.136693 systemd[1]: Reload requested from client PID 1572 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:06:04.136862 systemd[1]: Reloading...
Sep 12 17:06:04.369141 zram_generator::config[1643]: No configuration found.
Sep 12 17:06:04.965147 systemd[1]: Reloading finished in 827 ms.
Sep 12 17:06:04.993260 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:06:04.999204 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:06:05.021681 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:06:05.029629 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:06:05.041892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:06:05.095610 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:06:05.095659 systemd[1]: Reloading...
Sep 12 17:06:05.101849 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:06:05.101968 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:06:05.103070 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:06:05.106998 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:06:05.118692 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:06:05.119869 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Sep 12 17:06:05.120113 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Sep 12 17:06:05.148904 systemd-udevd[1697]: Using default interface naming scheme 'v255'.
Sep 12 17:06:05.171929 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:06:05.171985 systemd-tmpfiles[1696]: Skipping /boot
Sep 12 17:06:05.193030 ldconfig[1568]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:06:05.212808 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:06:05.212837 systemd-tmpfiles[1696]: Skipping /boot
Sep 12 17:06:05.474347 zram_generator::config[1772]: No configuration found.
Sep 12 17:06:05.578584 (udev-worker)[1717]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:06:06.095158 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:06:06.096587 systemd[1]: Reloading finished in 999 ms.
Sep 12 17:06:06.131578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:06:06.137513 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:06:06.143425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:06:06.197856 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:06:06.209741 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:06:06.217151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:06:06.227890 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:06:06.241770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:06:06.250769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:06:06.270101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:06:06.308000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:06:06.326031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:06:06.337047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:06:06.340490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:06:06.340755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:06:06.366969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:06:06.371163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:06:06.374357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:06:06.374659 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:06:06.375007 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:06:06.384059 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:06:06.389202 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:06:06.413504 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:06:06.480837 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:06:06.493933 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:06:06.498803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:06:06.500032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:06:06.557134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:06:06.562712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:06:06.580514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:06:06.598265 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:06:06.598843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:06:06.603167 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:06:06.603854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:06:06.608871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:06:06.617481 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:06:06.622042 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:06:06.639689 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:06:06.669254 augenrules[1948]: No rules Sep 12 17:06:06.674109 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:06:06.675933 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:06:06.779941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:06:06.801456 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:06:06.839096 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Sep 12 17:06:06.883077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:06:06.984393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:06:07.149745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:06:07.201248 systemd-networkd[1897]: lo: Link UP Sep 12 17:06:07.201330 systemd-networkd[1897]: lo: Gained carrier Sep 12 17:06:07.206155 systemd-networkd[1897]: Enumeration completed Sep 12 17:06:07.206515 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:06:07.213843 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:06:07.218573 systemd-networkd[1897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:06:07.218601 systemd-networkd[1897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:06:07.222964 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:06:07.229814 systemd-networkd[1897]: eth0: Link UP Sep 12 17:06:07.231085 systemd-resolved[1898]: Positive Trust Anchors: Sep 12 17:06:07.232975 systemd-networkd[1897]: eth0: Gained carrier Sep 12 17:06:07.233045 systemd-resolved[1898]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:06:07.233057 systemd-networkd[1897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:06:07.233113 systemd-resolved[1898]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:06:07.247500 systemd-networkd[1897]: eth0: DHCPv4 address 172.31.16.146/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:06:07.276930 systemd-resolved[1898]: Defaulting to hostname 'linux'. Sep 12 17:06:07.286209 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:06:07.291031 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:06:07.295000 systemd[1]: Reached target network.target - Network. Sep 12 17:06:07.297848 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:06:07.301663 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:06:07.307751 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:06:07.313723 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:06:07.318035 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:06:07.321859 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:06:07.325825 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 17:06:07.330008 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:06:07.330067 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:06:07.332629 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:06:07.337076 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:06:07.343949 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:06:07.351780 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:06:07.357474 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:06:07.360998 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:06:07.368921 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:06:07.373816 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:06:07.378648 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:06:07.381892 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:06:07.384975 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:06:07.387595 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:06:07.387653 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:06:07.391471 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:06:07.397600 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:06:07.403781 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:06:07.412813 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 17:06:07.419862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:06:07.429053 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:06:07.432365 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:06:07.439876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:06:07.459757 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:06:07.473751 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:06:07.480970 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:06:07.498989 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:06:07.521667 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:06:07.532960 jq[1984]: false Sep 12 17:06:07.542724 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:06:07.547719 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:06:07.548787 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:06:07.556758 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:06:07.567747 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:06:07.579630 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:06:07.584153 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:06:07.584917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:06:07.647918 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:06:07.651619 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:06:07.657232 extend-filesystems[1985]: Found /dev/nvme0n1p6 Sep 12 17:06:07.680592 extend-filesystems[1985]: Found /dev/nvme0n1p9 Sep 12 17:06:07.695239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:06:07.718126 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Sep 12 17:06:07.736412 jq[1994]: true Sep 12 17:06:07.747243 tar[2003]: linux-arm64/LICENSE Sep 12 17:06:07.747243 tar[2003]: linux-arm64/helm Sep 12 17:06:07.794145 (ntainerd)[2017]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:06:07.820236 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Sep 12 17:06:07.825461 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:06:07.827420 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:06:07.834247 dbus-daemon[1982]: [system] SELinux support is enabled Sep 12 17:06:07.848531 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:06:07.866336 extend-filesystems[2036]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:06:07.883690 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:06:07.858916 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:06:07.858975 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 12 17:06:07.863594 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:06:07.863642 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:06:07.901926 dbus-daemon[1982]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1897 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:06:07.902328 jq[2028]: true Sep 12 17:06:07.928429 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:06:07.931200 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:06:07.950045 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:00:01 UTC 2025 (1): Starting
Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:00:01 UTC 2025 (1): Starting Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: ---------------------------------------------------- Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: corporation. Support and training for ntp-4 are Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: available at https://www.nwtime.org/support Sep 12 17:06:07.952954 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: ----------------------------------------------------
Sep 12 17:06:07.950101 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:06:07.955438 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:06:07.950120 ntpd[1987]: ---------------------------------------------------- Sep 12 17:06:07.950137 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:06:07.950154 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:06:07.950170 ntpd[1987]: corporation. Support and training for ntp-4 are Sep 12 17:06:07.950188 ntpd[1987]: available at https://www.nwtime.org/support Sep 12 17:06:07.950205 ntpd[1987]: ---------------------------------------------------- Sep 12 17:06:07.976573 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 12 17:06:07.977504 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 12 17:06:07.988326 ntpd[1987]: basedate set to 2025-08-31 Sep 12 17:06:07.988410 ntpd[1987]: gps base set to 2025-08-31 (week 2382) Sep 12 17:06:07.988566 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: basedate set to 2025-08-31 Sep 12 17:06:07.988566 ntpd[1987]: 12 Sep 17:06:07 ntpd[1987]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:06:08.004620 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:06:08.006655 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:06:08.014940 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: Listen normally on 3 eth0 172.31.16.146:123 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: bind(21) AF_INET6 fe80::453:61ff:fe2b:a029%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: unable to create socket on eth0 (5) for fe80::453:61ff:fe2b:a029%2#123 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: failed to init interface for address fe80::453:61ff:fe2b:a029%2 Sep 12 17:06:08.015921 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: Listening on routing socket on fd #21 for interface updates
Sep 12 17:06:08.015390 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:06:08.015464 ntpd[1987]: Listen normally on 3 eth0 172.31.16.146:123 Sep 12 17:06:08.015579 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 12 17:06:08.015667 ntpd[1987]: bind(21) AF_INET6 fe80::453:61ff:fe2b:a029%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:06:08.015711 ntpd[1987]: unable to create socket on eth0 (5) for fe80::453:61ff:fe2b:a029%2#123 Sep 12 17:06:08.015738 ntpd[1987]: failed to init interface for address fe80::453:61ff:fe2b:a029%2 Sep 12 17:06:08.015800 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 12 17:06:08.036337 update_engine[1993]: I20250912 17:06:08.030596 1993 main.cc:92] Flatcar Update Engine starting Sep 12 17:06:08.040999 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:06:08.054923 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:06:08.061863 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:06:08.067601 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:06:08.067601 ntpd[1987]: 12 Sep 17:06:08 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:06:08.061923 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:06:08.072635 update_engine[1993]: I20250912 17:06:08.068631 1993 update_check_scheduler.cc:74] Next update check in 10m18s Sep 12 17:06:08.072700 extend-filesystems[2036]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:06:08.072700 extend-filesystems[2036]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:06:08.072700 extend-filesystems[2036]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:06:08.092531 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:06:08.142748 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:06:08.146849 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:06:08.147380 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 12 17:06:08.166148 coreos-metadata[1981]: Sep 12 17:06:08.165 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:06:08.169416 coreos-metadata[1981]: Sep 12 17:06:08.169 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:06:08.171339 coreos-metadata[1981]: Sep 12 17:06:08.171 INFO Fetch successful Sep 12 17:06:08.171339 coreos-metadata[1981]: Sep 12 17:06:08.171 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:06:08.177520 coreos-metadata[1981]: Sep 12 17:06:08.177 INFO Fetch successful Sep 12 17:06:08.177520 coreos-metadata[1981]: Sep 12 17:06:08.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:06:08.181604 coreos-metadata[1981]: Sep 12 17:06:08.181 INFO Fetch successful Sep 12 17:06:08.181604 coreos-metadata[1981]: Sep 12 17:06:08.181 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:06:08.182784 coreos-metadata[1981]: Sep 12 17:06:08.182 INFO Fetch successful Sep 12 17:06:08.183161 coreos-metadata[1981]: Sep 12 17:06:08.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:06:08.184538 coreos-metadata[1981]: Sep 12 17:06:08.183 INFO Fetch failed with 404: resource not found
Sep 12 17:06:08.185359 coreos-metadata[1981]: Sep 12 17:06:08.184 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:06:08.186233 coreos-metadata[1981]: Sep 12 17:06:08.186 INFO Fetch successful Sep 12 17:06:08.186233 coreos-metadata[1981]: Sep 12 17:06:08.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:06:08.187152 coreos-metadata[1981]: Sep 12 17:06:08.187 INFO Fetch successful Sep 12 17:06:08.187483 coreos-metadata[1981]: Sep 12 17:06:08.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:06:08.199175 coreos-metadata[1981]: Sep 12 17:06:08.198 INFO Fetch successful Sep 12 17:06:08.199175 coreos-metadata[1981]: Sep 12 17:06:08.198 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:06:08.199802 coreos-metadata[1981]: Sep 12 17:06:08.199 INFO Fetch successful Sep 12 17:06:08.199802 coreos-metadata[1981]: Sep 12 17:06:08.199 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:06:08.202474 coreos-metadata[1981]: Sep 12 17:06:08.202 INFO Fetch successful
Sep 12 17:06:08.241055 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:06:08.241113 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 17:06:08.248512 bash[2073]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:06:08.252007 systemd-logind[1992]: New seat seat0. Sep 12 17:06:08.257451 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:06:08.262018 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:06:08.272491 systemd[1]: Starting sshkeys.service... Sep 12 17:06:08.349438 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:06:08.359153 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:06:08.414713 systemd-networkd[1897]: eth0: Gained IPv6LL Sep 12 17:06:08.481658 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:06:08.491013 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:06:08.502658 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:06:08.513579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:06:08.520878 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:06:08.525788 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:06:08.585632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:06:08.795412 coreos-metadata[2105]: Sep 12 17:06:08.793 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:06:08.798334 coreos-metadata[2105]: Sep 12 17:06:08.796 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:06:08.799553 coreos-metadata[2105]: Sep 12 17:06:08.799 INFO Fetch successful Sep 12 17:06:08.806597 coreos-metadata[2105]: Sep 12 17:06:08.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:06:08.809643 coreos-metadata[2105]: Sep 12 17:06:08.806 INFO Fetch successful Sep 12 17:06:08.818486 unknown[2105]: wrote ssh authorized keys file for user: core Sep 12 17:06:08.834434 amazon-ssm-agent[2125]: Initializing new seelog logger Sep 12 17:06:08.834434 amazon-ssm-agent[2125]: New Seelog Logger Creation Complete Sep 12 17:06:08.834434 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.834434 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.834434 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 processing appconfig overrides Sep 12 17:06:08.837347 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.837347 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.837347 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 processing appconfig overrides Sep 12 17:06:08.837347 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:06:08.837347 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.837347 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 processing appconfig overrides Sep 12 17:06:08.839321 amazon-ssm-agent[2125]: 2025-09-12 17:06:08.8350 INFO Proxy environment variables: Sep 12 17:06:08.844334 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.844334 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:08.844334 amazon-ssm-agent[2125]: 2025/09/12 17:06:08 processing appconfig overrides Sep 12 17:06:08.858330 locksmithd[2059]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:06:08.869277 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:06:08.938306 amazon-ssm-agent[2125]: 2025-09-12 17:06:08.8355 INFO https_proxy: Sep 12 17:06:08.995407 update-ssh-keys[2175]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:06:09.000489 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:06:09.017024 systemd[1]: Finished sshkeys.service. Sep 12 17:06:09.047406 amazon-ssm-agent[2125]: 2025-09-12 17:06:08.8356 INFO http_proxy: Sep 12 17:06:09.152065 amazon-ssm-agent[2125]: 2025-09-12 17:06:08.8356 INFO no_proxy: Sep 12 17:06:09.263493 amazon-ssm-agent[2125]: 2025-09-12 17:06:08.8358 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:06:09.366690 amazon-ssm-agent[2125]: 2025-09-12 17:06:08.8359 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:06:09.434371 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Sep 12 17:06:09.448687 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:06:09.449881 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2042 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:06:09.464448 containerd[2017]: time="2025-09-12T17:06:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:06:09.466268 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:06:09.476346 containerd[2017]: time="2025-09-12T17:06:09.471247142Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:06:09.476558 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.2847 INFO Agent will take identity from EC2 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.537678302Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.336µs" Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.537738278Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.537774410Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538075754Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538109450Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:06:09.539331 containerd[2017]: 
time="2025-09-12T17:06:09.538158614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538272194Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538326842Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538759802Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538797566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538826786Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539331 containerd[2017]: time="2025-09-12T17:06:09.538850870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:06:09.539931 containerd[2017]: time="2025-09-12T17:06:09.539035586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:06:09.542110 containerd[2017]: time="2025-09-12T17:06:09.542049326Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:06:09.546427 containerd[2017]: time="2025-09-12T17:06:09.544962278Z" level=info msg="skip loading 
plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:06:09.546427 containerd[2017]: time="2025-09-12T17:06:09.545062190Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:06:09.546427 containerd[2017]: time="2025-09-12T17:06:09.545166794Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:06:09.546427 containerd[2017]: time="2025-09-12T17:06:09.545631218Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:06:09.546427 containerd[2017]: time="2025-09-12T17:06:09.545832590Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558151730Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558412838Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558463754Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558603170Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558649538Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558678458Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:06:09.559327 containerd[2017]: 
time="2025-09-12T17:06:09.558710414Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558751598Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558780254Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:06:09.559327 containerd[2017]: time="2025-09-12T17:06:09.558807086Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:06:09.563191 containerd[2017]: time="2025-09-12T17:06:09.560170178Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:06:09.564321 containerd[2017]: time="2025-09-12T17:06:09.563782142Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:06:09.571331 containerd[2017]: time="2025-09-12T17:06:09.571129790Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:06:09.576340 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.2952 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 12 17:06:09.577001 containerd[2017]: time="2025-09-12T17:06:09.574459346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:06:09.577404 containerd[2017]: time="2025-09-12T17:06:09.577269974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:06:09.577669 containerd[2017]: time="2025-09-12T17:06:09.577375790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.577998134Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.579435278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.579514106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.579548054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.579603590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.579642530Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:06:09.581644 containerd[2017]: time="2025-09-12T17:06:09.579698198Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:06:09.582399 containerd[2017]: time="2025-09-12T17:06:09.581596778Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:06:09.583367 containerd[2017]: time="2025-09-12T17:06:09.582798242Z" level=info msg="Start snapshots syncer" Sep 12 17:06:09.587321 containerd[2017]: time="2025-09-12T17:06:09.583031642Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:06:09.587321 containerd[2017]: time="2025-09-12T17:06:09.585922970Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:06:09.587642 containerd[2017]: time="2025-09-12T17:06:09.586033778Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:06:09.587642 containerd[2017]: time="2025-09-12T17:06:09.586212974Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:06:09.591711 containerd[2017]: time="2025-09-12T17:06:09.589327346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:06:09.591711 containerd[2017]: time="2025-09-12T17:06:09.591393386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:06:09.591711 containerd[2017]: time="2025-09-12T17:06:09.591476258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:06:09.591711 containerd[2017]: time="2025-09-12T17:06:09.591533126Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:06:09.591711 containerd[2017]: time="2025-09-12T17:06:09.591633338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:06:09.593832 containerd[2017]: time="2025-09-12T17:06:09.591672218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:06:09.593832 containerd[2017]: time="2025-09-12T17:06:09.592131002Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:06:09.593832 containerd[2017]: time="2025-09-12T17:06:09.593535506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:06:09.594582 containerd[2017]: time="2025-09-12T17:06:09.593585318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:06:09.594582 containerd[2017]: time="2025-09-12T17:06:09.594128042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:06:09.594582 containerd[2017]: time="2025-09-12T17:06:09.594330914Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:06:09.594582 containerd[2017]: time="2025-09-12T17:06:09.594495350Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:06:09.594582 containerd[2017]: time="2025-09-12T17:06:09.594529886Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:06:09.594999 containerd[2017]: time="2025-09-12T17:06:09.594920462Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:06:09.595131 containerd[2017]: time="2025-09-12T17:06:09.594971822Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:06:09.595370 containerd[2017]: time="2025-09-12T17:06:09.595323866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:06:09.595647 containerd[2017]: time="2025-09-12T17:06:09.595561982Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:06:09.597072 containerd[2017]: time="2025-09-12T17:06:09.596782718Z" level=info msg="runtime interface created" Sep 12 17:06:09.597072 containerd[2017]: time="2025-09-12T17:06:09.596852510Z" level=info msg="created NRI interface" Sep 12 17:06:09.597072 containerd[2017]: time="2025-09-12T17:06:09.596906450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:06:09.597565 containerd[2017]: time="2025-09-12T17:06:09.597442742Z" level=info msg="Connect containerd service" Sep 12 17:06:09.601688 containerd[2017]: time="2025-09-12T17:06:09.599454458Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:06:09.609116 
containerd[2017]: time="2025-09-12T17:06:09.607739222Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:06:09.674516 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.2952 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 17:06:09.781329 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.2952 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:06:09.784793 sshd_keygen[2024]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:06:09.848413 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:06:09.856998 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:06:09.865484 systemd[1]: Started sshd@0-172.31.16.146:22-139.178.68.195:37390.service - OpenSSH per-connection server daemon (139.178.68.195:37390). Sep 12 17:06:09.882335 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.3067 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 12 17:06:09.944099 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:06:09.945942 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:06:09.954379 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 12 17:06:09.969275 polkitd[2200]: Started polkitd version 126 Sep 12 17:06:09.982396 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.3067 INFO [Registrar] Starting registrar module Sep 12 17:06:10.054870 polkitd[2200]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:06:10.058670 polkitd[2200]: Loading rules from directory /run/polkit-1/rules.d Sep 12 17:06:10.058778 polkitd[2200]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:06:10.071406 polkitd[2200]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 12 17:06:10.071523 polkitd[2200]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:06:10.071616 polkitd[2200]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:06:10.077207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:06:10.081919 polkitd[2200]: Finished loading, compiling and executing 2 rules Sep 12 17:06:10.084491 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.3197 INFO [EC2Identity] Checking disk for registration info Sep 12 17:06:10.087003 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:06:10.099854 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:06:10.103839 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:06:10.107481 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:06:10.106918 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 12 17:06:10.113950 polkitd[2200]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:06:10.159455 containerd[2017]: time="2025-09-12T17:06:10.158948797Z" level=info msg="Start subscribing containerd event" Sep 12 17:06:10.160782 containerd[2017]: time="2025-09-12T17:06:10.159643645Z" level=info msg="Start recovering state" Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163670773Z" level=info msg="Start event monitor" Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163758289Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163804333Z" level=info msg="Start streaming server" Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163830661Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163848241Z" level=info msg="runtime interface starting up..." Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163862569Z" level=info msg="starting plugins..." Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.163919653Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.164036101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.164127529Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:06:10.164318 containerd[2017]: time="2025-09-12T17:06:10.164259781Z" level=info msg="containerd successfully booted in 0.708528s" Sep 12 17:06:10.164467 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 12 17:06:10.183466 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.3198 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 12 17:06:10.198728 systemd-hostnamed[2042]: Hostname set to (transient) Sep 12 17:06:10.199606 systemd-resolved[1898]: System hostname changed to 'ip-172-31-16-146'. Sep 12 17:06:10.283888 amazon-ssm-agent[2125]: 2025-09-12 17:06:09.3198 INFO [EC2Identity] Generating registration keypair Sep 12 17:06:10.373435 sshd[2222]: Accepted publickey for core from 139.178.68.195 port 37390 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:10.381217 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:10.403987 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:06:10.409708 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:06:10.444441 systemd-logind[1992]: New session 1 of user core. Sep 12 17:06:10.476339 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:06:10.488419 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:06:10.522815 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:06:10.532825 systemd-logind[1992]: New session c1 of user core. Sep 12 17:06:10.591661 tar[2003]: linux-arm64/README.md Sep 12 17:06:10.621476 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:06:10.771177 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.7709 INFO [EC2Identity] Checking write access before registering Sep 12 17:06:10.825802 amazon-ssm-agent[2125]: 2025/09/12 17:06:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:06:10.825802 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:06:10.826003 amazon-ssm-agent[2125]: 2025/09/12 17:06:10 processing appconfig overrides Sep 12 17:06:10.860916 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.7719 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 12 17:06:10.861674 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8253 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:06:10.861824 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8254 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 12 17:06:10.861824 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8256 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:06:10.861824 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8256 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:06:10.861824 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8599 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:06:10.862022 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8604 INFO [CredentialRefresher] Credentials ready Sep 12 17:06:10.871949 amazon-ssm-agent[2125]: 2025-09-12 17:06:10.8619 INFO [CredentialRefresher] Next credential rotation will be in 29.9999683182 minutes Sep 12 17:06:10.901961 systemd[2246]: Queued start job for default target default.target. Sep 12 17:06:10.907606 systemd[2246]: Created slice app.slice - User Application Slice. Sep 12 17:06:10.907691 systemd[2246]: Reached target paths.target - Paths. Sep 12 17:06:10.907799 systemd[2246]: Reached target timers.target - Timers. Sep 12 17:06:10.911548 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:06:10.942689 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:06:10.944267 systemd[2246]: Reached target sockets.target - Sockets. Sep 12 17:06:10.944442 systemd[2246]: Reached target basic.target - Basic System. Sep 12 17:06:10.944537 systemd[2246]: Reached target default.target - Main User Target. 
Sep 12 17:06:10.944605 systemd[2246]: Startup finished in 394ms. Sep 12 17:06:10.944893 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:06:10.957880 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:06:10.966381 ntpd[1987]: Listen normally on 6 eth0 [fe80::453:61ff:fe2b:a029%2]:123 Sep 12 17:06:10.967583 ntpd[1987]: 12 Sep 17:06:10 ntpd[1987]: Listen normally on 6 eth0 [fe80::453:61ff:fe2b:a029%2]:123 Sep 12 17:06:11.125549 systemd[1]: Started sshd@1-172.31.16.146:22-139.178.68.195:38008.service - OpenSSH per-connection server daemon (139.178.68.195:38008). Sep 12 17:06:11.340526 sshd[2260]: Accepted publickey for core from 139.178.68.195 port 38008 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:11.343102 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:11.352835 systemd-logind[1992]: New session 2 of user core. Sep 12 17:06:11.362665 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:06:11.494640 sshd[2263]: Connection closed by 139.178.68.195 port 38008 Sep 12 17:06:11.496646 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:11.505998 systemd[1]: sshd@1-172.31.16.146:22-139.178.68.195:38008.service: Deactivated successfully. Sep 12 17:06:11.510694 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:06:11.513707 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:06:11.533841 systemd[1]: Started sshd@2-172.31.16.146:22-139.178.68.195:38022.service - OpenSSH per-connection server daemon (139.178.68.195:38022). Sep 12 17:06:11.539865 systemd-logind[1992]: Removed session 2. 
Sep 12 17:06:11.735790 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 38022 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:11.739009 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:11.750408 systemd-logind[1992]: New session 3 of user core. Sep 12 17:06:11.757614 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:06:11.890941 sshd[2272]: Connection closed by 139.178.68.195 port 38022 Sep 12 17:06:11.891972 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:11.903535 systemd[1]: sshd@2-172.31.16.146:22-139.178.68.195:38022.service: Deactivated successfully. Sep 12 17:06:11.905507 amazon-ssm-agent[2125]: 2025-09-12 17:06:11.9052 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:06:11.911894 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:06:11.914869 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:06:11.920077 systemd-logind[1992]: Removed session 3. Sep 12 17:06:12.007541 amazon-ssm-agent[2125]: 2025-09-12 17:06:11.9097 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2277) started Sep 12 17:06:12.044658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:06:12.052627 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:06:12.056965 systemd[1]: Startup finished in 3.682s (kernel) + 10.248s (initrd) + 11.183s (userspace) = 25.113s. 
Sep 12 17:06:12.066815 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:06:12.108500 amazon-ssm-agent[2125]: 2025-09-12 17:06:11.9098 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:06:13.721979 kubelet[2288]: E0912 17:06:13.721863 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:06:13.726525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:06:13.726852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:06:13.727596 systemd[1]: kubelet.service: Consumed 1.571s CPU time, 259M memory peak. Sep 12 17:06:15.123390 systemd-resolved[1898]: Clock change detected. Flushing caches. Sep 12 17:06:22.087046 systemd[1]: Started sshd@3-172.31.16.146:22-139.178.68.195:57818.service - OpenSSH per-connection server daemon (139.178.68.195:57818). Sep 12 17:06:22.277529 sshd[2307]: Accepted publickey for core from 139.178.68.195 port 57818 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:22.279878 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:22.288057 systemd-logind[1992]: New session 4 of user core. Sep 12 17:06:22.304029 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:06:22.428939 sshd[2310]: Connection closed by 139.178.68.195 port 57818 Sep 12 17:06:22.429948 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:22.435814 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 17:06:22.436048 systemd[1]: sshd@3-172.31.16.146:22-139.178.68.195:57818.service: Deactivated successfully. Sep 12 17:06:22.440141 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:06:22.446267 systemd-logind[1992]: Removed session 4. Sep 12 17:06:22.464710 systemd[1]: Started sshd@4-172.31.16.146:22-139.178.68.195:57832.service - OpenSSH per-connection server daemon (139.178.68.195:57832). Sep 12 17:06:22.655559 sshd[2316]: Accepted publickey for core from 139.178.68.195 port 57832 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:22.658040 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:22.666004 systemd-logind[1992]: New session 5 of user core. Sep 12 17:06:22.678100 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:06:22.798490 sshd[2319]: Connection closed by 139.178.68.195 port 57832 Sep 12 17:06:22.799464 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:22.807227 systemd[1]: sshd@4-172.31.16.146:22-139.178.68.195:57832.service: Deactivated successfully. Sep 12 17:06:22.811177 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:06:22.813392 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:06:22.817243 systemd-logind[1992]: Removed session 5. Sep 12 17:06:22.833953 systemd[1]: Started sshd@5-172.31.16.146:22-139.178.68.195:57848.service - OpenSSH per-connection server daemon (139.178.68.195:57848). Sep 12 17:06:23.020402 sshd[2325]: Accepted publickey for core from 139.178.68.195 port 57848 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:23.022910 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:23.031891 systemd-logind[1992]: New session 6 of user core. Sep 12 17:06:23.039059 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 17:06:23.163670 sshd[2328]: Connection closed by 139.178.68.195 port 57848 Sep 12 17:06:23.163382 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:23.170396 systemd[1]: sshd@5-172.31.16.146:22-139.178.68.195:57848.service: Deactivated successfully. Sep 12 17:06:23.174533 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:06:23.176259 systemd-logind[1992]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:06:23.179669 systemd-logind[1992]: Removed session 6. Sep 12 17:06:23.202000 systemd[1]: Started sshd@6-172.31.16.146:22-139.178.68.195:57858.service - OpenSSH per-connection server daemon (139.178.68.195:57858). Sep 12 17:06:23.397571 sshd[2334]: Accepted publickey for core from 139.178.68.195 port 57858 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:23.400481 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:23.410718 systemd-logind[1992]: New session 7 of user core. Sep 12 17:06:23.421094 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:06:23.591946 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:06:23.592671 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:06:23.609033 sudo[2338]: pam_unix(sudo:session): session closed for user root Sep 12 17:06:23.633811 sshd[2337]: Connection closed by 139.178.68.195 port 57858 Sep 12 17:06:23.633592 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:23.641672 systemd[1]: sshd@6-172.31.16.146:22-139.178.68.195:57858.service: Deactivated successfully. Sep 12 17:06:23.646167 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:06:23.648926 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:06:23.653719 systemd-logind[1992]: Removed session 7. 
Sep 12 17:06:23.669158 systemd[1]: Started sshd@7-172.31.16.146:22-139.178.68.195:57874.service - OpenSSH per-connection server daemon (139.178.68.195:57874). Sep 12 17:06:23.859899 sshd[2344]: Accepted publickey for core from 139.178.68.195 port 57874 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:23.862536 sshd-session[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:23.871138 systemd-logind[1992]: New session 8 of user core. Sep 12 17:06:23.884068 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:06:23.886600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:06:23.890493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:06:23.995066 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:06:23.995762 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:06:24.008088 sudo[2352]: pam_unix(sudo:session): session closed for user root Sep 12 17:06:24.019301 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:06:24.020159 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:06:24.041472 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:06:24.119390 augenrules[2374]: No rules Sep 12 17:06:24.122210 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:06:24.123351 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 12 17:06:24.125441 sudo[2351]: pam_unix(sudo:session): session closed for user root Sep 12 17:06:24.151592 sshd[2348]: Connection closed by 139.178.68.195 port 57874 Sep 12 17:06:24.151462 sshd-session[2344]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:24.161199 systemd[1]: sshd@7-172.31.16.146:22-139.178.68.195:57874.service: Deactivated successfully. Sep 12 17:06:24.169072 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:06:24.174875 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:06:24.195265 systemd[1]: Started sshd@8-172.31.16.146:22-139.178.68.195:57876.service - OpenSSH per-connection server daemon (139.178.68.195:57876). Sep 12 17:06:24.197432 systemd-logind[1992]: Removed session 8. Sep 12 17:06:24.325727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:06:24.341594 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:06:24.401630 sshd[2383]: Accepted publickey for core from 139.178.68.195 port 57876 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:06:24.405225 sshd-session[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:24.419380 systemd-logind[1992]: New session 9 of user core. Sep 12 17:06:24.426110 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 12 17:06:24.443077 kubelet[2391]: E0912 17:06:24.443011 2391 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:06:24.451417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:06:24.451819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:06:24.454904 systemd[1]: kubelet.service: Consumed 351ms CPU time, 106.5M memory peak. Sep 12 17:06:24.532233 sudo[2399]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:06:24.532939 sudo[2399]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:06:25.215191 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:06:25.230490 (dockerd)[2417]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:06:25.828716 dockerd[2417]: time="2025-09-12T17:06:25.828608361Z" level=info msg="Starting up" Sep 12 17:06:25.831253 dockerd[2417]: time="2025-09-12T17:06:25.831187017Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:06:25.853930 dockerd[2417]: time="2025-09-12T17:06:25.853847001Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:06:25.878987 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3763320002-merged.mount: Deactivated successfully. Sep 12 17:06:25.900487 systemd[1]: var-lib-docker-metacopy\x2dcheck4048816124-merged.mount: Deactivated successfully. 
Sep 12 17:06:25.918839 dockerd[2417]: time="2025-09-12T17:06:25.918377205Z" level=info msg="Loading containers: start." Sep 12 17:06:25.942838 kernel: Initializing XFRM netlink socket Sep 12 17:06:26.377860 (udev-worker)[2438]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:06:26.460488 systemd-networkd[1897]: docker0: Link UP Sep 12 17:06:26.467474 dockerd[2417]: time="2025-09-12T17:06:26.467329808Z" level=info msg="Loading containers: done." Sep 12 17:06:26.496815 dockerd[2417]: time="2025-09-12T17:06:26.496177592Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:06:26.496815 dockerd[2417]: time="2025-09-12T17:06:26.496302884Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:06:26.496815 dockerd[2417]: time="2025-09-12T17:06:26.496472660Z" level=info msg="Initializing buildkit" Sep 12 17:06:26.540136 dockerd[2417]: time="2025-09-12T17:06:26.540082508Z" level=info msg="Completed buildkit initialization" Sep 12 17:06:26.556000 dockerd[2417]: time="2025-09-12T17:06:26.555927860Z" level=info msg="Daemon has completed initialization" Sep 12 17:06:26.556328 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:06:26.558269 dockerd[2417]: time="2025-09-12T17:06:26.556292396Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:06:27.993558 containerd[2017]: time="2025-09-12T17:06:27.993493559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:06:28.647850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212179055.mount: Deactivated successfully. 
Sep 12 17:06:30.086817 containerd[2017]: time="2025-09-12T17:06:30.086604922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:30.089522 containerd[2017]: time="2025-09-12T17:06:30.088563298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Sep 12 17:06:30.090799 containerd[2017]: time="2025-09-12T17:06:30.090680110Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:30.097557 containerd[2017]: time="2025-09-12T17:06:30.097453306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:30.100148 containerd[2017]: time="2025-09-12T17:06:30.099648610Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.106091955s" Sep 12 17:06:30.100148 containerd[2017]: time="2025-09-12T17:06:30.099729454Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 17:06:30.103384 containerd[2017]: time="2025-09-12T17:06:30.103136530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:06:31.573809 containerd[2017]: time="2025-09-12T17:06:31.573714577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:31.576343 containerd[2017]: time="2025-09-12T17:06:31.575813413Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Sep 12 17:06:31.578284 containerd[2017]: time="2025-09-12T17:06:31.578206609Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:31.584008 containerd[2017]: time="2025-09-12T17:06:31.583932433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:31.586519 containerd[2017]: time="2025-09-12T17:06:31.586450873Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.483054051s" Sep 12 17:06:31.586911 containerd[2017]: time="2025-09-12T17:06:31.586724509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 17:06:31.587738 containerd[2017]: time="2025-09-12T17:06:31.587382433Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:06:32.883242 containerd[2017]: time="2025-09-12T17:06:32.883169776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:32.885325 containerd[2017]: time="2025-09-12T17:06:32.885256756Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Sep 12 17:06:32.886530 containerd[2017]: time="2025-09-12T17:06:32.886426324Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:32.891991 containerd[2017]: time="2025-09-12T17:06:32.891892816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:32.894394 containerd[2017]: time="2025-09-12T17:06:32.894183580Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.306739323s" Sep 12 17:06:32.894394 containerd[2017]: time="2025-09-12T17:06:32.894247804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 17:06:32.895072 containerd[2017]: time="2025-09-12T17:06:32.895020172Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:06:34.406305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977629317.mount: Deactivated successfully. Sep 12 17:06:34.510622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:06:34.515633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:06:34.992097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:06:35.009818 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:06:35.133421 kubelet[2711]: E0912 17:06:35.133328 2711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:06:35.141494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:06:35.142868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:06:35.143593 systemd[1]: kubelet.service: Consumed 371ms CPU time, 106.8M memory peak. Sep 12 17:06:35.366864 containerd[2017]: time="2025-09-12T17:06:35.366756100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:35.368629 containerd[2017]: time="2025-09-12T17:06:35.368116180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Sep 12 17:06:35.370000 containerd[2017]: time="2025-09-12T17:06:35.369916924Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:35.374420 containerd[2017]: time="2025-09-12T17:06:35.374358076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:35.375945 containerd[2017]: time="2025-09-12T17:06:35.375858340Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id 
\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 2.480616108s" Sep 12 17:06:35.375945 containerd[2017]: time="2025-09-12T17:06:35.375934972Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 17:06:35.379830 containerd[2017]: time="2025-09-12T17:06:35.378501112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:06:35.884041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85298906.mount: Deactivated successfully. Sep 12 17:06:37.162803 containerd[2017]: time="2025-09-12T17:06:37.162591593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:37.169536 containerd[2017]: time="2025-09-12T17:06:37.169058981Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 12 17:06:37.176674 containerd[2017]: time="2025-09-12T17:06:37.176465933Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:37.191157 containerd[2017]: time="2025-09-12T17:06:37.191085989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:37.193802 containerd[2017]: time="2025-09-12T17:06:37.193329389Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.814733105s" Sep 12 17:06:37.193802 containerd[2017]: time="2025-09-12T17:06:37.193406333Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 17:06:37.195147 containerd[2017]: time="2025-09-12T17:06:37.195066905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:06:37.700545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808884626.mount: Deactivated successfully. Sep 12 17:06:37.708857 containerd[2017]: time="2025-09-12T17:06:37.708781940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:06:37.710695 containerd[2017]: time="2025-09-12T17:06:37.710632172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 17:06:37.712313 containerd[2017]: time="2025-09-12T17:06:37.712247024Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:06:37.718069 containerd[2017]: time="2025-09-12T17:06:37.717972284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:06:37.720820 containerd[2017]: time="2025-09-12T17:06:37.720737576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.256995ms" Sep 12 17:06:37.720820 containerd[2017]: time="2025-09-12T17:06:37.720810680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:06:37.721877 containerd[2017]: time="2025-09-12T17:06:37.721552640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:06:38.251081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449927667.mount: Deactivated successfully. Sep 12 17:06:40.380341 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:06:40.633236 containerd[2017]: time="2025-09-12T17:06:40.632739154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:40.636910 containerd[2017]: time="2025-09-12T17:06:40.636840874Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Sep 12 17:06:40.639492 containerd[2017]: time="2025-09-12T17:06:40.639397282Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:40.645843 containerd[2017]: time="2025-09-12T17:06:40.645427630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:40.648076 containerd[2017]: time="2025-09-12T17:06:40.647807062Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id 
\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.926202534s" Sep 12 17:06:40.648076 containerd[2017]: time="2025-09-12T17:06:40.647875090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 17:06:45.260658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:06:45.264069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:06:45.626029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:06:45.640241 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:06:45.715083 kubelet[2861]: E0912 17:06:45.715025 2861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:06:45.720412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:06:45.720987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:06:45.722032 systemd[1]: kubelet.service: Consumed 300ms CPU time, 106.3M memory peak. Sep 12 17:06:52.263680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:06:52.265946 systemd[1]: kubelet.service: Consumed 300ms CPU time, 106.3M memory peak. Sep 12 17:06:52.280054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:06:52.329501 systemd[1]: Reload requested from client PID 2876 ('systemctl') (unit session-9.scope)... Sep 12 17:06:52.329541 systemd[1]: Reloading... Sep 12 17:06:52.610844 zram_generator::config[2923]: No configuration found. Sep 12 17:06:53.111956 update_engine[1993]: I20250912 17:06:53.111835 1993 update_attempter.cc:509] Updating boot flags... Sep 12 17:06:53.146810 systemd[1]: Reloading finished in 816 ms. Sep 12 17:06:53.264543 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:06:53.264732 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:06:53.265264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:06:53.265337 systemd[1]: kubelet.service: Consumed 269ms CPU time, 95M memory peak. Sep 12 17:06:53.269244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:06:53.926072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:06:53.976864 (kubelet)[3166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:06:54.164654 kubelet[3166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:06:54.166898 kubelet[3166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:06:54.168805 kubelet[3166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:06:54.168805 kubelet[3166]: I0912 17:06:54.167211 3166 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:06:55.520110 kubelet[3166]: I0912 17:06:55.520059 3166 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:06:55.520846 kubelet[3166]: I0912 17:06:55.520816 3166 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:06:55.522321 kubelet[3166]: I0912 17:06:55.522279 3166 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:06:55.575748 kubelet[3166]: E0912 17:06:55.575690 3166 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:06:55.577689 kubelet[3166]: I0912 17:06:55.577634 3166 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:06:55.592361 kubelet[3166]: I0912 17:06:55.592328 3166 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:06:55.598503 kubelet[3166]: I0912 17:06:55.598448 3166 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:06:55.599169 kubelet[3166]: I0912 17:06:55.599113 3166 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:06:55.599501 kubelet[3166]: I0912 17:06:55.599168 3166 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:06:55.599694 kubelet[3166]: I0912 17:06:55.599626 3166 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 
17:06:55.599694 kubelet[3166]: I0912 17:06:55.599649 3166 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:06:55.601493 kubelet[3166]: I0912 17:06:55.601438 3166 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:06:55.607607 kubelet[3166]: I0912 17:06:55.607537 3166 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:06:55.607607 kubelet[3166]: I0912 17:06:55.607586 3166 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:06:55.610080 kubelet[3166]: I0912 17:06:55.610019 3166 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:06:55.612526 kubelet[3166]: I0912 17:06:55.612371 3166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:06:55.616673 kubelet[3166]: E0912 17:06:55.616600 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-146&limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:06:55.621024 kubelet[3166]: E0912 17:06:55.620950 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:06:55.621294 kubelet[3166]: I0912 17:06:55.621120 3166 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:06:55.622411 kubelet[3166]: I0912 17:06:55.622357 3166 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 
17:06:55.623303 kubelet[3166]: W0912 17:06:55.622601 3166 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:06:55.639139 kubelet[3166]: I0912 17:06:55.639105 3166 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:06:55.639363 kubelet[3166]: I0912 17:06:55.639344 3166 server.go:1289] "Started kubelet" Sep 12 17:06:55.644528 kubelet[3166]: I0912 17:06:55.644467 3166 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:06:55.647381 kubelet[3166]: I0912 17:06:55.647336 3166 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:06:55.648375 kubelet[3166]: I0912 17:06:55.648318 3166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:06:55.649832 kubelet[3166]: I0912 17:06:55.649516 3166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:06:55.650143 kubelet[3166]: I0912 17:06:55.650115 3166 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:06:55.652662 kubelet[3166]: E0912 17:06:55.650390 3166 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-146.186497f0ef03d1d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-146,UID:ip-172-31-16-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-146,},FirstTimestamp:2025-09-12 17:06:55.639286225 +0000 UTC m=+1.629699033,LastTimestamp:2025-09-12 17:06:55.639286225 +0000 UTC m=+1.629699033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-146,}" Sep 12 17:06:55.653947 kubelet[3166]: I0912 17:06:55.653880 3166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:06:55.659329 kubelet[3166]: E0912 17:06:55.659268 3166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-146\" not found" Sep 12 17:06:55.659620 kubelet[3166]: I0912 17:06:55.659454 3166 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:06:55.660431 kubelet[3166]: I0912 17:06:55.660343 3166 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:06:55.660746 kubelet[3166]: I0912 17:06:55.660724 3166 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:06:55.661975 kubelet[3166]: E0912 17:06:55.661912 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:06:55.662786 kubelet[3166]: I0912 17:06:55.662485 3166 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:06:55.662786 kubelet[3166]: I0912 17:06:55.662641 3166 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:06:55.665628 kubelet[3166]: E0912 17:06:55.665491 3166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-146?timeout=10s\": dial tcp 172.31.16.146:6443: connect: connection refused" 
interval="200ms" Sep 12 17:06:55.667402 kubelet[3166]: E0912 17:06:55.667352 3166 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:06:55.667624 kubelet[3166]: I0912 17:06:55.667600 3166 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:06:55.697188 kubelet[3166]: I0912 17:06:55.697121 3166 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:06:55.697435 kubelet[3166]: I0912 17:06:55.697148 3166 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:06:55.697435 kubelet[3166]: I0912 17:06:55.697385 3166 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:06:55.700305 kubelet[3166]: I0912 17:06:55.700237 3166 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:06:55.701979 kubelet[3166]: I0912 17:06:55.701903 3166 policy_none.go:49] "None policy: Start" Sep 12 17:06:55.701979 kubelet[3166]: I0912 17:06:55.701945 3166 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:06:55.703575 kubelet[3166]: I0912 17:06:55.703177 3166 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:06:55.703991 kubelet[3166]: I0912 17:06:55.703944 3166 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:06:55.704105 kubelet[3166]: I0912 17:06:55.703997 3166 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:06:55.704105 kubelet[3166]: I0912 17:06:55.704036 3166 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:06:55.704105 kubelet[3166]: I0912 17:06:55.704050 3166 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:06:55.704239 kubelet[3166]: E0912 17:06:55.704112 3166 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:06:55.709002 kubelet[3166]: E0912 17:06:55.708958 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:06:55.717602 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:06:55.737397 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:06:55.760109 kubelet[3166]: E0912 17:06:55.760056 3166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-146\" not found" Sep 12 17:06:55.764728 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 17:06:55.768489 kubelet[3166]: E0912 17:06:55.768453 3166 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:06:55.769116 kubelet[3166]: I0912 17:06:55.769086 3166 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:06:55.769376 kubelet[3166]: I0912 17:06:55.769305 3166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:06:55.772255 kubelet[3166]: I0912 17:06:55.770246 3166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:06:55.773736 kubelet[3166]: E0912 17:06:55.773682 3166 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:06:55.773998 kubelet[3166]: E0912 17:06:55.773754 3166 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-146\" not found" Sep 12 17:06:55.831184 systemd[1]: Created slice kubepods-burstable-pod9118e3f9bae23900ce07cacccdf613a4.slice - libcontainer container kubepods-burstable-pod9118e3f9bae23900ce07cacccdf613a4.slice. 
Sep 12 17:06:55.858310 kubelet[3166]: E0912 17:06:55.857201 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:55.862495 kubelet[3166]: I0912 17:06:55.862455 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3bf2087688848356ab688bb0164fce-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-146\" (UID: \"dd3bf2087688848356ab688bb0164fce\") " pod="kube-system/kube-scheduler-ip-172-31-16-146"
Sep 12 17:06:55.862793 kubelet[3166]: I0912 17:06:55.862751 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/770e95d82dc1d56c1aef01a40d72b363-ca-certs\") pod \"kube-apiserver-ip-172-31-16-146\" (UID: \"770e95d82dc1d56c1aef01a40d72b363\") " pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:06:55.863117 kubelet[3166]: I0912 17:06:55.863033 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:06:55.863349 kubelet[3166]: I0912 17:06:55.863266 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:06:55.864799 kubelet[3166]: I0912 17:06:55.863996 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:06:55.864799 kubelet[3166]: I0912 17:06:55.864078 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:06:55.864799 kubelet[3166]: I0912 17:06:55.864148 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:06:55.867112 kubelet[3166]: E0912 17:06:55.867024 3166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-146?timeout=10s\": dial tcp 172.31.16.146:6443: connect: connection refused" interval="400ms"
Sep 12 17:06:55.868302 systemd[1]: Created slice kubepods-burstable-poddd3bf2087688848356ab688bb0164fce.slice - libcontainer container kubepods-burstable-poddd3bf2087688848356ab688bb0164fce.slice.
Sep 12 17:06:55.876817 kubelet[3166]: I0912 17:06:55.876743 3166 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-146"
Sep 12 17:06:55.878096 kubelet[3166]: E0912 17:06:55.878011 3166 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.146:6443/api/v1/nodes\": dial tcp 172.31.16.146:6443: connect: connection refused" node="ip-172-31-16-146"
Sep 12 17:06:55.879492 kubelet[3166]: E0912 17:06:55.879412 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:55.887277 systemd[1]: Created slice kubepods-burstable-pod770e95d82dc1d56c1aef01a40d72b363.slice - libcontainer container kubepods-burstable-pod770e95d82dc1d56c1aef01a40d72b363.slice.
Sep 12 17:06:55.891231 kubelet[3166]: E0912 17:06:55.890889 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:55.965506 kubelet[3166]: I0912 17:06:55.965453 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/770e95d82dc1d56c1aef01a40d72b363-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-146\" (UID: \"770e95d82dc1d56c1aef01a40d72b363\") " pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:06:55.966073 kubelet[3166]: I0912 17:06:55.966006 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/770e95d82dc1d56c1aef01a40d72b363-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-146\" (UID: \"770e95d82dc1d56c1aef01a40d72b363\") " pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:06:56.081128 kubelet[3166]: I0912 17:06:56.080989 3166 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-146"
Sep 12 17:06:56.081743 kubelet[3166]: E0912 17:06:56.081504 3166 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.146:6443/api/v1/nodes\": dial tcp 172.31.16.146:6443: connect: connection refused" node="ip-172-31-16-146"
Sep 12 17:06:56.158700 containerd[2017]: time="2025-09-12T17:06:56.158640707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-146,Uid:9118e3f9bae23900ce07cacccdf613a4,Namespace:kube-system,Attempt:0,}"
Sep 12 17:06:56.183238 containerd[2017]: time="2025-09-12T17:06:56.182627795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-146,Uid:dd3bf2087688848356ab688bb0164fce,Namespace:kube-system,Attempt:0,}"
Sep 12 17:06:56.192635 containerd[2017]: time="2025-09-12T17:06:56.192571175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-146,Uid:770e95d82dc1d56c1aef01a40d72b363,Namespace:kube-system,Attempt:0,}"
Sep 12 17:06:56.198192 containerd[2017]: time="2025-09-12T17:06:56.198113352Z" level=info msg="connecting to shim ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429" address="unix:///run/containerd/s/93a60b6ba735c7604c03740726806d83ab8a577e5b0984ccea147e239ff63e99" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:06:56.266181 containerd[2017]: time="2025-09-12T17:06:56.266096976Z" level=info msg="connecting to shim 18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d" address="unix:///run/containerd/s/ce688d1e8a28820cc3d35919d2c973bf7e1df8aadd39fe674a6d5e8c6988da3a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:06:56.268678 kubelet[3166]: E0912 17:06:56.268623 3166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-146?timeout=10s\": dial tcp 172.31.16.146:6443: connect: connection refused" interval="800ms"
Sep 12 17:06:56.274413 containerd[2017]: time="2025-09-12T17:06:56.273034692Z" level=info msg="connecting to shim 93daae20fd8fef92f3052e432439638d0134a13f3f28c8e528d917ff61fb5b5f" address="unix:///run/containerd/s/ec7239b5e778bca2f9d7566d8cba730a9a1f77bab435e21f3fc2860fd2295976" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:06:56.284255 systemd[1]: Started cri-containerd-ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429.scope - libcontainer container ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429.
Sep 12 17:06:56.373710 systemd[1]: Started cri-containerd-18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d.scope - libcontainer container 18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d.
Sep 12 17:06:56.382363 systemd[1]: Started cri-containerd-93daae20fd8fef92f3052e432439638d0134a13f3f28c8e528d917ff61fb5b5f.scope - libcontainer container 93daae20fd8fef92f3052e432439638d0134a13f3f28c8e528d917ff61fb5b5f.
Sep 12 17:06:56.434625 containerd[2017]: time="2025-09-12T17:06:56.434425177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-146,Uid:9118e3f9bae23900ce07cacccdf613a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429\""
Sep 12 17:06:56.454860 containerd[2017]: time="2025-09-12T17:06:56.454732489Z" level=info msg="CreateContainer within sandbox \"ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:06:56.490937 kubelet[3166]: I0912 17:06:56.490511 3166 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-146"
Sep 12 17:06:56.492105 containerd[2017]: time="2025-09-12T17:06:56.491299237Z" level=info msg="Container 605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:06:56.492220 kubelet[3166]: E0912 17:06:56.492049 3166 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.146:6443/api/v1/nodes\": dial tcp 172.31.16.146:6443: connect: connection refused" node="ip-172-31-16-146"
Sep 12 17:06:56.504836 kubelet[3166]: E0912 17:06:56.504736 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 17:06:56.519466 containerd[2017]: time="2025-09-12T17:06:56.519247153Z" level=info msg="CreateContainer within sandbox \"ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\""
Sep 12 17:06:56.523890 containerd[2017]: time="2025-09-12T17:06:56.523692217Z" level=info msg="StartContainer for \"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\""
Sep 12 17:06:56.527473 containerd[2017]: time="2025-09-12T17:06:56.527404849Z" level=info msg="connecting to shim 605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32" address="unix:///run/containerd/s/93a60b6ba735c7604c03740726806d83ab8a577e5b0984ccea147e239ff63e99" protocol=ttrpc version=3
Sep 12 17:06:56.532406 containerd[2017]: time="2025-09-12T17:06:56.532314505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-146,Uid:dd3bf2087688848356ab688bb0164fce,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d\""
Sep 12 17:06:56.542864 containerd[2017]: time="2025-09-12T17:06:56.541693297Z" level=info msg="CreateContainer within sandbox \"18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:06:56.565584 containerd[2017]: time="2025-09-12T17:06:56.565514437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-146,Uid:770e95d82dc1d56c1aef01a40d72b363,Namespace:kube-system,Attempt:0,} returns sandbox id \"93daae20fd8fef92f3052e432439638d0134a13f3f28c8e528d917ff61fb5b5f\""
Sep 12 17:06:56.571580 systemd[1]: Started cri-containerd-605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32.scope - libcontainer container 605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32.
Sep 12 17:06:56.572127 containerd[2017]: time="2025-09-12T17:06:56.572058925Z" level=info msg="Container da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:06:56.578290 containerd[2017]: time="2025-09-12T17:06:56.578240605Z" level=info msg="CreateContainer within sandbox \"93daae20fd8fef92f3052e432439638d0134a13f3f28c8e528d917ff61fb5b5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:06:56.596252 containerd[2017]: time="2025-09-12T17:06:56.596171377Z" level=info msg="CreateContainer within sandbox \"18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\""
Sep 12 17:06:56.598525 containerd[2017]: time="2025-09-12T17:06:56.598437626Z" level=info msg="StartContainer for \"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\""
Sep 12 17:06:56.600183 containerd[2017]: time="2025-09-12T17:06:56.600081086Z" level=info msg="Container 0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:06:56.602391 containerd[2017]: time="2025-09-12T17:06:56.602313686Z" level=info msg="connecting to shim da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e" address="unix:///run/containerd/s/ce688d1e8a28820cc3d35919d2c973bf7e1df8aadd39fe674a6d5e8c6988da3a" protocol=ttrpc version=3
Sep 12 17:06:56.625282 containerd[2017]: time="2025-09-12T17:06:56.624096350Z" level=info msg="CreateContainer within sandbox \"93daae20fd8fef92f3052e432439638d0134a13f3f28c8e528d917ff61fb5b5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b\""
Sep 12 17:06:56.628674 containerd[2017]: time="2025-09-12T17:06:56.628582838Z" level=info msg="StartContainer for \"0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b\""
Sep 12 17:06:56.632356 containerd[2017]: time="2025-09-12T17:06:56.632285582Z" level=info msg="connecting to shim 0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b" address="unix:///run/containerd/s/ec7239b5e778bca2f9d7566d8cba730a9a1f77bab435e21f3fc2860fd2295976" protocol=ttrpc version=3
Sep 12 17:06:56.676117 systemd[1]: Started cri-containerd-da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e.scope - libcontainer container da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e.
Sep 12 17:06:56.691268 systemd[1]: Started cri-containerd-0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b.scope - libcontainer container 0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b.
Sep 12 17:06:56.782089 containerd[2017]: time="2025-09-12T17:06:56.781961606Z" level=info msg="StartContainer for \"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\" returns successfully"
Sep 12 17:06:56.784023 kubelet[3166]: E0912 17:06:56.783830 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 12 17:06:56.883363 containerd[2017]: time="2025-09-12T17:06:56.883144683Z" level=info msg="StartContainer for \"0861db42e7f1fc3819949ff493d6bc92793e793e2e6bfb4828fda30ef1f5d91b\" returns successfully"
Sep 12 17:06:56.913931 containerd[2017]: time="2025-09-12T17:06:56.912254091Z" level=info msg="StartContainer for \"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\" returns successfully"
Sep 12 17:06:56.945812 kubelet[3166]: E0912 17:06:56.945708 3166 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-146&limit=500&resourceVersion=0\": dial tcp 172.31.16.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 17:06:57.294844 kubelet[3166]: I0912 17:06:57.294655 3166 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-146"
Sep 12 17:06:57.775912 kubelet[3166]: E0912 17:06:57.775862 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:57.784822 kubelet[3166]: E0912 17:06:57.784739 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:57.796183 kubelet[3166]: E0912 17:06:57.795893 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:58.798975 kubelet[3166]: E0912 17:06:58.798573 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:58.801318 kubelet[3166]: E0912 17:06:58.800015 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:58.802322 kubelet[3166]: E0912 17:06:58.802269 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:59.802855 kubelet[3166]: E0912 17:06:59.802559 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:59.805876 kubelet[3166]: E0912 17:06:59.802679 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:06:59.806142 kubelet[3166]: E0912 17:06:59.804019 3166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-146\" not found" node="ip-172-31-16-146"
Sep 12 17:07:01.336799 kubelet[3166]: I0912 17:07:01.336177 3166 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-146"
Sep 12 17:07:01.365874 kubelet[3166]: I0912 17:07:01.365398 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:07:01.416516 kubelet[3166]: E0912 17:07:01.416328 3166 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-146.186497f0ef03d1d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-146,UID:ip-172-31-16-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-146,},FirstTimestamp:2025-09-12 17:06:55.639286225 +0000 UTC m=+1.629699033,LastTimestamp:2025-09-12 17:06:55.639286225 +0000 UTC m=+1.629699033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-146,}"
Sep 12 17:07:01.491305 kubelet[3166]: E0912 17:07:01.491230 3166 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s"
Sep 12 17:07:01.535908 kubelet[3166]: E0912 17:07:01.535515 3166 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-146.186497f0f0af99c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-146,UID:ip-172-31-16-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-16-146,},FirstTimestamp:2025-09-12 17:06:55.667321285 +0000 UTC m=+1.657734117,LastTimestamp:2025-09-12 17:06:55.667321285 +0000 UTC m=+1.657734117,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-146,}"
Sep 12 17:07:01.535908 kubelet[3166]: E0912 17:07:01.535647 3166 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-146\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:07:01.535908 kubelet[3166]: I0912 17:07:01.535707 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:07:01.544222 kubelet[3166]: E0912 17:07:01.542937 3166 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-146\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:07:01.544222 kubelet[3166]: I0912 17:07:01.542992 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-146"
Sep 12 17:07:01.552804 kubelet[3166]: E0912 17:07:01.551362 3166 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-146\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-146"
Sep 12 17:07:01.626290 kubelet[3166]: I0912 17:07:01.625632 3166 apiserver.go:52] "Watching apiserver"
Sep 12 17:07:01.660859 kubelet[3166]: I0912 17:07:01.660748 3166 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:07:02.203272 kubelet[3166]: I0912 17:07:02.202915 3166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:07:03.971244 systemd[1]: Reload requested from client PID 3541 ('systemctl') (unit session-9.scope)...
Sep 12 17:07:03.971274 systemd[1]: Reloading...
Sep 12 17:07:04.168827 zram_generator::config[3588]: No configuration found.
Sep 12 17:07:04.681471 systemd[1]: Reloading finished in 709 ms.
Sep 12 17:07:04.724162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:07:04.740001 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:07:04.740615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:07:04.740705 systemd[1]: kubelet.service: Consumed 2.272s CPU time, 127.9M memory peak.
Sep 12 17:07:04.746412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:07:05.250472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:07:05.268533 (kubelet)[3645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:07:05.364824 kubelet[3645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:07:05.364824 kubelet[3645]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:07:05.365406 kubelet[3645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:07:05.365406 kubelet[3645]: I0912 17:07:05.365055 3645 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:07:05.381042 kubelet[3645]: I0912 17:07:05.380593 3645 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:07:05.381167 kubelet[3645]: I0912 17:07:05.381078 3645 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:07:05.383714 kubelet[3645]: I0912 17:07:05.383655 3645 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:07:05.388607 kubelet[3645]: I0912 17:07:05.388535 3645 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 12 17:07:05.397006 kubelet[3645]: I0912 17:07:05.396811 3645 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:07:05.416257 kubelet[3645]: I0912 17:07:05.416180 3645 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:07:05.424320 kubelet[3645]: I0912 17:07:05.423854 3645 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:07:05.425292 kubelet[3645]: I0912 17:07:05.425231 3645 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:07:05.425693 kubelet[3645]: I0912 17:07:05.425417 3645 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:07:05.426375 kubelet[3645]: I0912 17:07:05.425960 3645 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:07:05.426375 kubelet[3645]: I0912 17:07:05.425989 3645 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:07:05.426375 kubelet[3645]: I0912 17:07:05.426070 3645 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:07:05.428015 kubelet[3645]: I0912 17:07:05.426636 3645 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:07:05.428162 kubelet[3645]: I0912 17:07:05.428026 3645 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:07:05.428162 kubelet[3645]: I0912 17:07:05.428087 3645 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:07:05.428162 kubelet[3645]: I0912 17:07:05.428116 3645 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:07:05.433830 kubelet[3645]: I0912 17:07:05.432824 3645 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:07:05.435707 kubelet[3645]: I0912 17:07:05.435258 3645 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:07:05.444803 kubelet[3645]: I0912 17:07:05.443386 3645 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:07:05.444803 kubelet[3645]: I0912 17:07:05.443455 3645 server.go:1289] "Started kubelet"
Sep 12 17:07:05.446690 kubelet[3645]: I0912 17:07:05.446612 3645 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:07:05.450211 kubelet[3645]: I0912 17:07:05.449432 3645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:07:05.450560 kubelet[3645]: I0912 17:07:05.450507 3645 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:07:05.463620 kubelet[3645]: I0912 17:07:05.463505 3645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:07:05.463932 kubelet[3645]: I0912 17:07:05.463886 3645 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:07:05.472198 kubelet[3645]: I0912 17:07:05.471830 3645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:07:05.479323 kubelet[3645]: I0912 17:07:05.476576 3645 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:07:05.479323 kubelet[3645]: I0912 17:07:05.476992 3645 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:07:05.479323 kubelet[3645]: E0912 17:07:05.477191 3645 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-146\" not found"
Sep 12 17:07:05.489803 kubelet[3645]: I0912 17:07:05.489686 3645 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:07:05.505892 kubelet[3645]: I0912 17:07:05.505041 3645 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:07:05.510832 kubelet[3645]: I0912 17:07:05.510620 3645 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:07:05.555072 kubelet[3645]: I0912 17:07:05.554012 3645 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:07:05.577505 kubelet[3645]: E0912 17:07:05.577402 3645 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:07:05.585578 kubelet[3645]: I0912 17:07:05.582663 3645 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:07:05.586533 kubelet[3645]: I0912 17:07:05.586434 3645 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:07:05.586533 kubelet[3645]: I0912 17:07:05.586486 3645 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 17:07:05.586533 kubelet[3645]: I0912 17:07:05.586528 3645 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:07:05.586533 kubelet[3645]: I0912 17:07:05.586545 3645 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 17:07:05.586959 kubelet[3645]: E0912 17:07:05.586625 3645 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:07:05.687674 kubelet[3645]: E0912 17:07:05.687627 3645 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.712948 3645 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.712982 3645 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713036 3645 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713299 3645 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713322 3645 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713360 3645 policy_none.go:49] "None policy: Start"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713382 3645 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713428 3645 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:07:05.714845 kubelet[3645]: I0912 17:07:05.713630 3645 state_mem.go:75] "Updated machine memory state"
Sep 12 17:07:05.729625 kubelet[3645]: E0912 17:07:05.729579 3645 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 17:07:05.730148 kubelet[3645]: I0912 17:07:05.730113 3645 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:07:05.730339 kubelet[3645]: I0912 17:07:05.730285 3645 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:07:05.731341 kubelet[3645]: I0912 17:07:05.731200 3645 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:07:05.744629 kubelet[3645]: E0912 17:07:05.742925 3645 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:07:05.864424 kubelet[3645]: I0912 17:07:05.864338 3645 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-146"
Sep 12 17:07:05.885151 kubelet[3645]: I0912 17:07:05.885008 3645 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-146"
Sep 12 17:07:05.886215 kubelet[3645]: I0912 17:07:05.886142 3645 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-146"
Sep 12 17:07:05.894172 kubelet[3645]: I0912 17:07:05.891474 3645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-146"
Sep 12 17:07:05.894172 kubelet[3645]: I0912 17:07:05.893266 3645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:07:05.896347 kubelet[3645]: I0912 17:07:05.894956 3645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:07:05.918709 kubelet[3645]: E0912 17:07:05.918638 3645 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-146\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:07:05.994754 kubelet[3645]: I0912 17:07:05.994684 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/770e95d82dc1d56c1aef01a40d72b363-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-146\" (UID: \"770e95d82dc1d56c1aef01a40d72b363\") " pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep 12 17:07:05.994923 kubelet[3645]: I0912 17:07:05.994762 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:07:05.995085 kubelet[3645]: I0912 17:07:05.994979 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146"
Sep 12 17:07:05.995917 kubelet[3645]: I0912 17:07:05.995106 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3bf2087688848356ab688bb0164fce-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-146\" (UID: \"dd3bf2087688848356ab688bb0164fce\") " pod="kube-system/kube-scheduler-ip-172-31-16-146"
Sep 12 17:07:05.996065 kubelet[3645]: I0912 17:07:05.995997 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/770e95d82dc1d56c1aef01a40d72b363-ca-certs\") pod \"kube-apiserver-ip-172-31-16-146\" (UID: \"770e95d82dc1d56c1aef01a40d72b363\") " pod="kube-system/kube-apiserver-ip-172-31-16-146"
Sep
12 17:07:05.996340 kubelet[3645]: I0912 17:07:05.996103 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/770e95d82dc1d56c1aef01a40d72b363-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-146\" (UID: \"770e95d82dc1d56c1aef01a40d72b363\") " pod="kube-system/kube-apiserver-ip-172-31-16-146" Sep 12 17:07:05.996340 kubelet[3645]: I0912 17:07:05.996157 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146" Sep 12 17:07:05.996340 kubelet[3645]: I0912 17:07:05.996196 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146" Sep 12 17:07:05.996340 kubelet[3645]: I0912 17:07:05.996235 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9118e3f9bae23900ce07cacccdf613a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-146\" (UID: \"9118e3f9bae23900ce07cacccdf613a4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-146" Sep 12 17:07:06.430127 kubelet[3645]: I0912 17:07:06.429508 3645 apiserver.go:52] "Watching apiserver" Sep 12 17:07:06.478897 kubelet[3645]: I0912 17:07:06.478738 3645 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:07:06.590200 kubelet[3645]: 
I0912 17:07:06.589408 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-146" podStartSLOduration=4.589381067 podStartE2EDuration="4.589381067s" podCreationTimestamp="2025-09-12 17:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:06.586858175 +0000 UTC m=+1.305665575" watchObservedRunningTime="2025-09-12 17:07:06.589381067 +0000 UTC m=+1.308188443" Sep 12 17:07:06.590200 kubelet[3645]: I0912 17:07:06.589623 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-146" podStartSLOduration=1.5896100990000002 podStartE2EDuration="1.589610099s" podCreationTimestamp="2025-09-12 17:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:06.570224807 +0000 UTC m=+1.289032183" watchObservedRunningTime="2025-09-12 17:07:06.589610099 +0000 UTC m=+1.308417475" Sep 12 17:07:06.623188 kubelet[3645]: I0912 17:07:06.622919 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-146" podStartSLOduration=1.622738895 podStartE2EDuration="1.622738895s" podCreationTimestamp="2025-09-12 17:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:06.621310139 +0000 UTC m=+1.340117527" watchObservedRunningTime="2025-09-12 17:07:06.622738895 +0000 UTC m=+1.341546343" Sep 12 17:07:09.607006 kubelet[3645]: I0912 17:07:09.606851 3645 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:07:09.609853 containerd[2017]: time="2025-09-12T17:07:09.609073934Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." Sep 12 17:07:09.611334 kubelet[3645]: I0912 17:07:09.610034 3645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:07:10.598862 systemd[1]: Created slice kubepods-besteffort-pod5c89b1b2_a37c_4572_94d1_a64f11e92a20.slice - libcontainer container kubepods-besteffort-pod5c89b1b2_a37c_4572_94d1_a64f11e92a20.slice. Sep 12 17:07:10.600404 kubelet[3645]: I0912 17:07:10.597752 3645 status_manager.go:895] "Failed to get status for pod" podUID="5c89b1b2-a37c-4572-94d1-a64f11e92a20" pod="kube-system/kube-proxy-q797w" err="pods \"kube-proxy-q797w\" is forbidden: User \"system:node:ip-172-31-16-146\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-146' and this object" Sep 12 17:07:10.602331 kubelet[3645]: E0912 17:07:10.602149 3645 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-16-146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-146' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Sep 12 17:07:10.602889 kubelet[3645]: E0912 17:07:10.602735 3645 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-16-146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-146' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Sep 12 17:07:10.629116 kubelet[3645]: I0912 17:07:10.628740 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5c89b1b2-a37c-4572-94d1-a64f11e92a20-lib-modules\") pod \"kube-proxy-q797w\" (UID: \"5c89b1b2-a37c-4572-94d1-a64f11e92a20\") " pod="kube-system/kube-proxy-q797w" Sep 12 17:07:10.629116 kubelet[3645]: I0912 17:07:10.628853 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c89b1b2-a37c-4572-94d1-a64f11e92a20-xtables-lock\") pod \"kube-proxy-q797w\" (UID: \"5c89b1b2-a37c-4572-94d1-a64f11e92a20\") " pod="kube-system/kube-proxy-q797w" Sep 12 17:07:10.629116 kubelet[3645]: I0912 17:07:10.628918 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pkx\" (UniqueName: \"kubernetes.io/projected/5c89b1b2-a37c-4572-94d1-a64f11e92a20-kube-api-access-p2pkx\") pod \"kube-proxy-q797w\" (UID: \"5c89b1b2-a37c-4572-94d1-a64f11e92a20\") " pod="kube-system/kube-proxy-q797w" Sep 12 17:07:10.629116 kubelet[3645]: I0912 17:07:10.628968 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c89b1b2-a37c-4572-94d1-a64f11e92a20-kube-proxy\") pod \"kube-proxy-q797w\" (UID: \"5c89b1b2-a37c-4572-94d1-a64f11e92a20\") " pod="kube-system/kube-proxy-q797w" Sep 12 17:07:10.851600 systemd[1]: Created slice kubepods-besteffort-podd774cd34_7c17_4ff3_bc31_e4954fc91a2e.slice - libcontainer container kubepods-besteffort-podd774cd34_7c17_4ff3_bc31_e4954fc91a2e.slice. 
Sep 12 17:07:10.932819 kubelet[3645]: I0912 17:07:10.932689 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb72n\" (UniqueName: \"kubernetes.io/projected/d774cd34-7c17-4ff3-bc31-e4954fc91a2e-kube-api-access-kb72n\") pod \"tigera-operator-755d956888-9kfqf\" (UID: \"d774cd34-7c17-4ff3-bc31-e4954fc91a2e\") " pod="tigera-operator/tigera-operator-755d956888-9kfqf" Sep 12 17:07:10.933063 kubelet[3645]: I0912 17:07:10.933000 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d774cd34-7c17-4ff3-bc31-e4954fc91a2e-var-lib-calico\") pod \"tigera-operator-755d956888-9kfqf\" (UID: \"d774cd34-7c17-4ff3-bc31-e4954fc91a2e\") " pod="tigera-operator/tigera-operator-755d956888-9kfqf" Sep 12 17:07:11.162752 containerd[2017]: time="2025-09-12T17:07:11.162578126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9kfqf,Uid:d774cd34-7c17-4ff3-bc31-e4954fc91a2e,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:07:11.198920 containerd[2017]: time="2025-09-12T17:07:11.198306338Z" level=info msg="connecting to shim 95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06" address="unix:///run/containerd/s/3fad37ead4c3191f339fcebe555eab2fc2796aeb5f5e22c85f38254abfd322ab" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:07:11.258140 systemd[1]: Started cri-containerd-95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06.scope - libcontainer container 95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06. 
Sep 12 17:07:11.351129 containerd[2017]: time="2025-09-12T17:07:11.351046995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9kfqf,Uid:d774cd34-7c17-4ff3-bc31-e4954fc91a2e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06\"" Sep 12 17:07:11.354699 containerd[2017]: time="2025-09-12T17:07:11.354626295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:07:11.754675 kubelet[3645]: E0912 17:07:11.754160 3645 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:07:11.754675 kubelet[3645]: E0912 17:07:11.754225 3645 projected.go:194] Error preparing data for projected volume kube-api-access-p2pkx for pod kube-system/kube-proxy-q797w: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:07:11.754675 kubelet[3645]: E0912 17:07:11.754344 3645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c89b1b2-a37c-4572-94d1-a64f11e92a20-kube-api-access-p2pkx podName:5c89b1b2-a37c-4572-94d1-a64f11e92a20 nodeName:}" failed. No retries permitted until 2025-09-12 17:07:12.254309733 +0000 UTC m=+6.973117097 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p2pkx" (UniqueName: "kubernetes.io/projected/5c89b1b2-a37c-4572-94d1-a64f11e92a20-kube-api-access-p2pkx") pod "kube-proxy-q797w" (UID: "5c89b1b2-a37c-4572-94d1-a64f11e92a20") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:07:12.417082 containerd[2017]: time="2025-09-12T17:07:12.417005656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q797w,Uid:5c89b1b2-a37c-4572-94d1-a64f11e92a20,Namespace:kube-system,Attempt:0,}" Sep 12 17:07:12.476584 containerd[2017]: time="2025-09-12T17:07:12.476467504Z" level=info msg="connecting to shim eadb8423dd300c7d6e3bafa4b6847649a4e6dc682694e6c7a6a54c52e3318a74" address="unix:///run/containerd/s/eda66e5af1f57e562d7efc51744cea8ef3c6c567f8c767c1e2f661dbd3d2e6c5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:07:12.573442 systemd[1]: Started cri-containerd-eadb8423dd300c7d6e3bafa4b6847649a4e6dc682694e6c7a6a54c52e3318a74.scope - libcontainer container eadb8423dd300c7d6e3bafa4b6847649a4e6dc682694e6c7a6a54c52e3318a74. 
Sep 12 17:07:12.675102 containerd[2017]: time="2025-09-12T17:07:12.674936813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q797w,Uid:5c89b1b2-a37c-4572-94d1-a64f11e92a20,Namespace:kube-system,Attempt:0,} returns sandbox id \"eadb8423dd300c7d6e3bafa4b6847649a4e6dc682694e6c7a6a54c52e3318a74\"" Sep 12 17:07:12.692705 containerd[2017]: time="2025-09-12T17:07:12.691817837Z" level=info msg="CreateContainer within sandbox \"eadb8423dd300c7d6e3bafa4b6847649a4e6dc682694e6c7a6a54c52e3318a74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:07:12.747458 containerd[2017]: time="2025-09-12T17:07:12.747393738Z" level=info msg="Container 046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:07:12.773077 containerd[2017]: time="2025-09-12T17:07:12.773010342Z" level=info msg="CreateContainer within sandbox \"eadb8423dd300c7d6e3bafa4b6847649a4e6dc682694e6c7a6a54c52e3318a74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34\"" Sep 12 17:07:12.776447 containerd[2017]: time="2025-09-12T17:07:12.776132682Z" level=info msg="StartContainer for \"046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34\"" Sep 12 17:07:12.783715 containerd[2017]: time="2025-09-12T17:07:12.783657810Z" level=info msg="connecting to shim 046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34" address="unix:///run/containerd/s/eda66e5af1f57e562d7efc51744cea8ef3c6c567f8c767c1e2f661dbd3d2e6c5" protocol=ttrpc version=3 Sep 12 17:07:12.841117 systemd[1]: Started cri-containerd-046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34.scope - libcontainer container 046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34. 
Sep 12 17:07:12.991809 containerd[2017]: time="2025-09-12T17:07:12.991613347Z" level=info msg="StartContainer for \"046072c73961b2f08097ddf35e1b29b428228cab019c603a6cd8ab131ccb1d34\" returns successfully" Sep 12 17:07:13.353412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485126210.mount: Deactivated successfully. Sep 12 17:07:13.744965 kubelet[3645]: I0912 17:07:13.743614 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q797w" podStartSLOduration=3.743586379 podStartE2EDuration="3.743586379s" podCreationTimestamp="2025-09-12 17:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:13.739395043 +0000 UTC m=+8.458202431" watchObservedRunningTime="2025-09-12 17:07:13.743586379 +0000 UTC m=+8.462393827" Sep 12 17:07:14.262648 containerd[2017]: time="2025-09-12T17:07:14.262001597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:14.264555 containerd[2017]: time="2025-09-12T17:07:14.264499433Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:07:14.266066 containerd[2017]: time="2025-09-12T17:07:14.265998857Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:14.272210 containerd[2017]: time="2025-09-12T17:07:14.272149109Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:14.273730 containerd[2017]: time="2025-09-12T17:07:14.273667661Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.91747353s" Sep 12 17:07:14.273990 containerd[2017]: time="2025-09-12T17:07:14.273956501Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:07:14.282208 containerd[2017]: time="2025-09-12T17:07:14.282049193Z" level=info msg="CreateContainer within sandbox \"95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:07:14.295186 containerd[2017]: time="2025-09-12T17:07:14.295128845Z" level=info msg="Container 9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:07:14.302274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835850009.mount: Deactivated successfully. 
Sep 12 17:07:14.311412 containerd[2017]: time="2025-09-12T17:07:14.311212373Z" level=info msg="CreateContainer within sandbox \"95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\"" Sep 12 17:07:14.312889 containerd[2017]: time="2025-09-12T17:07:14.312708665Z" level=info msg="StartContainer for \"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\"" Sep 12 17:07:14.316269 containerd[2017]: time="2025-09-12T17:07:14.316170318Z" level=info msg="connecting to shim 9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40" address="unix:///run/containerd/s/3fad37ead4c3191f339fcebe555eab2fc2796aeb5f5e22c85f38254abfd322ab" protocol=ttrpc version=3 Sep 12 17:07:14.364115 systemd[1]: Started cri-containerd-9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40.scope - libcontainer container 9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40. 
Sep 12 17:07:14.440858 containerd[2017]: time="2025-09-12T17:07:14.440622258Z" level=info msg="StartContainer for \"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\" returns successfully" Sep 12 17:07:14.733370 kubelet[3645]: I0912 17:07:14.733024 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-9kfqf" podStartSLOduration=1.8111391060000002 podStartE2EDuration="4.732999368s" podCreationTimestamp="2025-09-12 17:07:10 +0000 UTC" firstStartedPulling="2025-09-12 17:07:11.353987511 +0000 UTC m=+6.072794875" lastFinishedPulling="2025-09-12 17:07:14.275847761 +0000 UTC m=+8.994655137" observedRunningTime="2025-09-12 17:07:14.732787916 +0000 UTC m=+9.451595316" watchObservedRunningTime="2025-09-12 17:07:14.732999368 +0000 UTC m=+9.451806744" Sep 12 17:07:21.945235 sudo[2399]: pam_unix(sudo:session): session closed for user root Sep 12 17:07:21.969016 sshd[2397]: Connection closed by 139.178.68.195 port 57876 Sep 12 17:07:21.971336 sshd-session[2383]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:21.991162 systemd[1]: sshd@8-172.31.16.146:22-139.178.68.195:57876.service: Deactivated successfully. Sep 12 17:07:22.002656 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:07:22.003099 systemd[1]: session-9.scope: Consumed 15.499s CPU time, 226.8M memory peak. Sep 12 17:07:22.021392 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:07:22.025954 systemd-logind[1992]: Removed session 9. Sep 12 17:07:36.648667 systemd[1]: Created slice kubepods-besteffort-pod446130d0_3672_4a66_833e_3dfd7a8b7e9b.slice - libcontainer container kubepods-besteffort-pod446130d0_3672_4a66_833e_3dfd7a8b7e9b.slice. 
Sep 12 17:07:36.658684 kubelet[3645]: I0912 17:07:36.658380 3645 status_manager.go:895] "Failed to get status for pod" podUID="446130d0-3672-4a66-833e-3dfd7a8b7e9b" pod="calico-system/calico-typha-676dc9885d-cdcp8" err="pods \"calico-typha-676dc9885d-cdcp8\" is forbidden: User \"system:node:ip-172-31-16-146\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-16-146' and this object" Sep 12 17:07:36.660332 kubelet[3645]: E0912 17:07:36.659575 3645 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-16-146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-16-146' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Sep 12 17:07:36.660332 kubelet[3645]: E0912 17:07:36.659794 3645 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-16-146\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-16-146' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret" Sep 12 17:07:36.728191 kubelet[3645]: I0912 17:07:36.728121 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/446130d0-3672-4a66-833e-3dfd7a8b7e9b-tigera-ca-bundle\") pod \"calico-typha-676dc9885d-cdcp8\" (UID: \"446130d0-3672-4a66-833e-3dfd7a8b7e9b\") " pod="calico-system/calico-typha-676dc9885d-cdcp8" Sep 12 17:07:36.728370 kubelet[3645]: I0912 17:07:36.728248 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"typha-certs\" (UniqueName: \"kubernetes.io/secret/446130d0-3672-4a66-833e-3dfd7a8b7e9b-typha-certs\") pod \"calico-typha-676dc9885d-cdcp8\" (UID: \"446130d0-3672-4a66-833e-3dfd7a8b7e9b\") " pod="calico-system/calico-typha-676dc9885d-cdcp8" Sep 12 17:07:36.728851 kubelet[3645]: I0912 17:07:36.728443 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5srw\" (UniqueName: \"kubernetes.io/projected/446130d0-3672-4a66-833e-3dfd7a8b7e9b-kube-api-access-f5srw\") pod \"calico-typha-676dc9885d-cdcp8\" (UID: \"446130d0-3672-4a66-833e-3dfd7a8b7e9b\") " pod="calico-system/calico-typha-676dc9885d-cdcp8" Sep 12 17:07:36.991252 systemd[1]: Created slice kubepods-besteffort-pod4cd9aff3_f79d_4ba7_8134_24647617ba39.slice - libcontainer container kubepods-besteffort-pod4cd9aff3_f79d_4ba7_8134_24647617ba39.slice. Sep 12 17:07:37.030914 kubelet[3645]: I0912 17:07:37.030848 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-cni-bin-dir\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031197 kubelet[3645]: I0912 17:07:37.030933 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-lib-modules\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031197 kubelet[3645]: I0912 17:07:37.030975 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-var-lib-calico\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " 
pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031197 kubelet[3645]: I0912 17:07:37.031017 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-policysync\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031197 kubelet[3645]: I0912 17:07:37.031080 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-var-run-calico\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031197 kubelet[3645]: I0912 17:07:37.031126 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4cd9aff3-f79d-4ba7-8134-24647617ba39-node-certs\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031466 kubelet[3645]: I0912 17:07:37.031164 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47gv\" (UniqueName: \"kubernetes.io/projected/4cd9aff3-f79d-4ba7-8134-24647617ba39-kube-api-access-w47gv\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031466 kubelet[3645]: I0912 17:07:37.031213 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-cni-net-dir\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031466 kubelet[3645]: 
I0912 17:07:37.031269 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-cni-log-dir\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031466 kubelet[3645]: I0912 17:07:37.031306 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-flexvol-driver-host\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.031466 kubelet[3645]: I0912 17:07:37.031340 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cd9aff3-f79d-4ba7-8134-24647617ba39-xtables-lock\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.032249 kubelet[3645]: I0912 17:07:37.031375 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd9aff3-f79d-4ba7-8134-24647617ba39-tigera-ca-bundle\") pod \"calico-node-jw8fk\" (UID: \"4cd9aff3-f79d-4ba7-8134-24647617ba39\") " pod="calico-system/calico-node-jw8fk" Sep 12 17:07:37.138663 kubelet[3645]: E0912 17:07:37.138622 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:07:37.138984 kubelet[3645]: W0912 17:07:37.138859 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:07:37.138984 kubelet[3645]: 
E0912 17:07:37.138921 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:07:37.158175 kubelet[3645]: E0912 17:07:37.158096 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:07:37.158175 kubelet[3645]: W0912 17:07:37.158163 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:07:37.158414 kubelet[3645]: E0912 17:07:37.158225 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:07:37.308811 kubelet[3645]: E0912 17:07:37.308656 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-drgp6" podUID="3451646a-5365-4f0c-8470-08bd6eac7042" Sep 12 17:07:37.387955 kubelet[3645]: E0912 17:07:37.387885 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:07:37.387955 kubelet[3645]: W0912 17:07:37.387938 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:07:37.388217 kubelet[3645]: E0912 17:07:37.387978 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:07:37.390439 kubelet[3645]: E0912 17:07:37.390371 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:07:37.390612 kubelet[3645]: W0912 17:07:37.390421 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:07:37.390612 kubelet[3645]: E0912 17:07:37.390505 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:07:37.391108 kubelet[3645]: E0912 17:07:37.391047 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:07:37.391108 kubelet[3645]: W0912 17:07:37.391094 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:07:37.391308 kubelet[3645]: E0912 17:07:37.391130 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:07:37.391968 kubelet[3645]: E0912 17:07:37.391895 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.391968 kubelet[3645]: W0912 17:07:37.391941 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.392176 kubelet[3645]: E0912 17:07:37.391977 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.394169 kubelet[3645]: E0912 17:07:37.394100 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.394169 kubelet[3645]: W0912 17:07:37.394151 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.394395 kubelet[3645]: E0912 17:07:37.394187 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.395816 kubelet[3645]: E0912 17:07:37.395709 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.396110 kubelet[3645]: W0912 17:07:37.395756 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.396219 kubelet[3645]: E0912 17:07:37.396115 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.399096 kubelet[3645]: E0912 17:07:37.398960 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.399096 kubelet[3645]: W0912 17:07:37.399075 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.399322 kubelet[3645]: E0912 17:07:37.399123 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.399872 kubelet[3645]: E0912 17:07:37.399812 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.399872 kubelet[3645]: W0912 17:07:37.399858 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.400077 kubelet[3645]: E0912 17:07:37.399895 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.400687 kubelet[3645]: E0912 17:07:37.400616 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.400687 kubelet[3645]: W0912 17:07:37.400664 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.400957 kubelet[3645]: E0912 17:07:37.400700 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.401408 kubelet[3645]: E0912 17:07:37.401352 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.401408 kubelet[3645]: W0912 17:07:37.401401 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.401598 kubelet[3645]: E0912 17:07:37.401436 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.403631 kubelet[3645]: E0912 17:07:37.403567 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.403631 kubelet[3645]: W0912 17:07:37.403613 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.404251 kubelet[3645]: E0912 17:07:37.403652 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.405693 kubelet[3645]: E0912 17:07:37.405633 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.405693 kubelet[3645]: W0912 17:07:37.405677 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.405992 kubelet[3645]: E0912 17:07:37.405720 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.409090 kubelet[3645]: E0912 17:07:37.409024 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.409090 kubelet[3645]: W0912 17:07:37.409073 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.409329 kubelet[3645]: E0912 17:07:37.409111 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.410178 kubelet[3645]: E0912 17:07:37.410096 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.410178 kubelet[3645]: W0912 17:07:37.410147 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.410178 kubelet[3645]: E0912 17:07:37.410183 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.412656 kubelet[3645]: E0912 17:07:37.412580 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.412656 kubelet[3645]: W0912 17:07:37.412630 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.412963 kubelet[3645]: E0912 17:07:37.412669 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.413219 kubelet[3645]: E0912 17:07:37.413160 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.413219 kubelet[3645]: W0912 17:07:37.413206 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.413384 kubelet[3645]: E0912 17:07:37.413242 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.415058 kubelet[3645]: E0912 17:07:37.414992 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.415058 kubelet[3645]: W0912 17:07:37.415039 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.415289 kubelet[3645]: E0912 17:07:37.415076 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.416809 kubelet[3645]: E0912 17:07:37.416167 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.416809 kubelet[3645]: W0912 17:07:37.416244 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.416809 kubelet[3645]: E0912 17:07:37.416317 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.418818 kubelet[3645]: E0912 17:07:37.417713 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.418818 kubelet[3645]: W0912 17:07:37.417761 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.418818 kubelet[3645]: E0912 17:07:37.417823 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.418818 kubelet[3645]: E0912 17:07:37.418280 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.418818 kubelet[3645]: W0912 17:07:37.418308 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.418818 kubelet[3645]: E0912 17:07:37.418337 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.436535 kubelet[3645]: E0912 17:07:37.436470 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.436535 kubelet[3645]: W0912 17:07:37.436522 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.436836 kubelet[3645]: E0912 17:07:37.436560 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.436836 kubelet[3645]: I0912 17:07:37.436639 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3451646a-5365-4f0c-8470-08bd6eac7042-registration-dir\") pod \"csi-node-driver-drgp6\" (UID: \"3451646a-5365-4f0c-8470-08bd6eac7042\") " pod="calico-system/csi-node-driver-drgp6"
Sep 12 17:07:37.439476 kubelet[3645]: E0912 17:07:37.439423 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.439476 kubelet[3645]: W0912 17:07:37.439468 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.439655 kubelet[3645]: E0912 17:07:37.439503 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.440152 kubelet[3645]: E0912 17:07:37.440101 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.440320 kubelet[3645]: W0912 17:07:37.440143 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.440320 kubelet[3645]: E0912 17:07:37.440196 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.442092 kubelet[3645]: E0912 17:07:37.442023 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.442092 kubelet[3645]: W0912 17:07:37.442073 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.442338 kubelet[3645]: E0912 17:07:37.442108 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.442338 kubelet[3645]: I0912 17:07:37.442174 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3451646a-5365-4f0c-8470-08bd6eac7042-socket-dir\") pod \"csi-node-driver-drgp6\" (UID: \"3451646a-5365-4f0c-8470-08bd6eac7042\") " pod="calico-system/csi-node-driver-drgp6"
Sep 12 17:07:37.442935 kubelet[3645]: E0912 17:07:37.442883 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.442935 kubelet[3645]: W0912 17:07:37.442930 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.443159 kubelet[3645]: E0912 17:07:37.442966 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.445549 kubelet[3645]: I0912 17:07:37.445431 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgk8j\" (UniqueName: \"kubernetes.io/projected/3451646a-5365-4f0c-8470-08bd6eac7042-kube-api-access-mgk8j\") pod \"csi-node-driver-drgp6\" (UID: \"3451646a-5365-4f0c-8470-08bd6eac7042\") " pod="calico-system/csi-node-driver-drgp6"
Sep 12 17:07:37.447994 kubelet[3645]: E0912 17:07:37.447876 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.447994 kubelet[3645]: W0912 17:07:37.447962 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.448265 kubelet[3645]: E0912 17:07:37.447999 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.448265 kubelet[3645]: I0912 17:07:37.448151 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3451646a-5365-4f0c-8470-08bd6eac7042-varrun\") pod \"csi-node-driver-drgp6\" (UID: \"3451646a-5365-4f0c-8470-08bd6eac7042\") " pod="calico-system/csi-node-driver-drgp6"
Sep 12 17:07:37.449904 kubelet[3645]: E0912 17:07:37.449635 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.449904 kubelet[3645]: W0912 17:07:37.449898 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.450125 kubelet[3645]: E0912 17:07:37.450065 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.451392 kubelet[3645]: I0912 17:07:37.451198 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3451646a-5365-4f0c-8470-08bd6eac7042-kubelet-dir\") pod \"csi-node-driver-drgp6\" (UID: \"3451646a-5365-4f0c-8470-08bd6eac7042\") " pod="calico-system/csi-node-driver-drgp6"
Sep 12 17:07:37.451573 kubelet[3645]: E0912 17:07:37.451425 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.451573 kubelet[3645]: W0912 17:07:37.451448 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.451573 kubelet[3645]: E0912 17:07:37.451483 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.452575 kubelet[3645]: E0912 17:07:37.452307 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.452575 kubelet[3645]: W0912 17:07:37.452352 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.452575 kubelet[3645]: E0912 17:07:37.452438 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.453180 kubelet[3645]: E0912 17:07:37.453111 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.453180 kubelet[3645]: W0912 17:07:37.453154 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.453352 kubelet[3645]: E0912 17:07:37.453188 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.453953 kubelet[3645]: E0912 17:07:37.453640 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.453953 kubelet[3645]: W0912 17:07:37.453666 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.453953 kubelet[3645]: E0912 17:07:37.453699 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.454258 kubelet[3645]: E0912 17:07:37.454207 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.454258 kubelet[3645]: W0912 17:07:37.454247 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.455029 kubelet[3645]: E0912 17:07:37.454278 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.455029 kubelet[3645]: E0912 17:07:37.454691 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.455029 kubelet[3645]: W0912 17:07:37.454718 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.455029 kubelet[3645]: E0912 17:07:37.454748 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.455305 kubelet[3645]: E0912 17:07:37.455200 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.455305 kubelet[3645]: W0912 17:07:37.455227 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.455305 kubelet[3645]: E0912 17:07:37.455256 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.455867 kubelet[3645]: E0912 17:07:37.455729 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.455867 kubelet[3645]: W0912 17:07:37.455756 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.455867 kubelet[3645]: E0912 17:07:37.455831 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.552614 kubelet[3645]: E0912 17:07:37.552575 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.553323 kubelet[3645]: W0912 17:07:37.552861 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.553323 kubelet[3645]: E0912 17:07:37.552919 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.554245 kubelet[3645]: E0912 17:07:37.554003 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.554245 kubelet[3645]: W0912 17:07:37.554038 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.554245 kubelet[3645]: E0912 17:07:37.554073 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.555231 kubelet[3645]: E0912 17:07:37.555190 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.555579 kubelet[3645]: W0912 17:07:37.555490 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.555579 kubelet[3645]: E0912 17:07:37.555539 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.556530 kubelet[3645]: E0912 17:07:37.556423 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.556530 kubelet[3645]: W0912 17:07:37.556462 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.556530 kubelet[3645]: E0912 17:07:37.556494 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.559412 kubelet[3645]: E0912 17:07:37.559283 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.559904 kubelet[3645]: W0912 17:07:37.559535 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.559904 kubelet[3645]: E0912 17:07:37.559575 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.560466 kubelet[3645]: E0912 17:07:37.560348 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.560466 kubelet[3645]: W0912 17:07:37.560384 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.560466 kubelet[3645]: E0912 17:07:37.560414 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.563140 kubelet[3645]: E0912 17:07:37.563077 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.563406 kubelet[3645]: W0912 17:07:37.563297 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.563790 kubelet[3645]: E0912 17:07:37.563489 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.565263 kubelet[3645]: E0912 17:07:37.565176 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.565263 kubelet[3645]: W0912 17:07:37.565248 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.565493 kubelet[3645]: E0912 17:07:37.565308 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.567600 kubelet[3645]: E0912 17:07:37.567548 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.567600 kubelet[3645]: W0912 17:07:37.567590 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.568001 kubelet[3645]: E0912 17:07:37.567625 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.568497 kubelet[3645]: E0912 17:07:37.568378 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.568497 kubelet[3645]: W0912 17:07:37.568416 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.568497 kubelet[3645]: E0912 17:07:37.568475 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.569938 kubelet[3645]: E0912 17:07:37.569858 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.569938 kubelet[3645]: W0912 17:07:37.569926 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.570637 kubelet[3645]: E0912 17:07:37.569960 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.573895 kubelet[3645]: E0912 17:07:37.572837 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.574087 kubelet[3645]: W0912 17:07:37.573894 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.574087 kubelet[3645]: E0912 17:07:37.573962 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.574567 kubelet[3645]: E0912 17:07:37.574523 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.574567 kubelet[3645]: W0912 17:07:37.574559 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.574789 kubelet[3645]: E0912 17:07:37.574591 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.577027 kubelet[3645]: E0912 17:07:37.576954 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.577027 kubelet[3645]: W0912 17:07:37.577027 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.577318 kubelet[3645]: E0912 17:07:37.577063 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.577701 kubelet[3645]: E0912 17:07:37.577654 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.577825 kubelet[3645]: W0912 17:07:37.577713 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.577825 kubelet[3645]: E0912 17:07:37.577749 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.579349 kubelet[3645]: E0912 17:07:37.579291 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.579539 kubelet[3645]: W0912 17:07:37.579340 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.579539 kubelet[3645]: E0912 17:07:37.579400 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.581240 kubelet[3645]: E0912 17:07:37.581185 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.581374 kubelet[3645]: W0912 17:07:37.581226 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.581374 kubelet[3645]: E0912 17:07:37.581286 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.584386 kubelet[3645]: E0912 17:07:37.584330 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.584386 kubelet[3645]: W0912 17:07:37.584373 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.584637 kubelet[3645]: E0912 17:07:37.584410 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.585073 kubelet[3645]: E0912 17:07:37.585020 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.585210 kubelet[3645]: W0912 17:07:37.585059 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.585210 kubelet[3645]: E0912 17:07:37.585121 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.591069 kubelet[3645]: E0912 17:07:37.591012 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.591069 kubelet[3645]: W0912 17:07:37.591055 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.591292 kubelet[3645]: E0912 17:07:37.591091 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.593879 kubelet[3645]: E0912 17:07:37.593819 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.594065 kubelet[3645]: W0912 17:07:37.593886 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.594065 kubelet[3645]: E0912 17:07:37.593925 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.594675 kubelet[3645]: E0912 17:07:37.594603 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.594675 kubelet[3645]: W0912 17:07:37.594671 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.595030 kubelet[3645]: E0912 17:07:37.594729 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.596811 kubelet[3645]: E0912 17:07:37.595442 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.596811 kubelet[3645]: W0912 17:07:37.595691 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.596811 kubelet[3645]: E0912 17:07:37.595989 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.598696 kubelet[3645]: E0912 17:07:37.598491 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.598696 kubelet[3645]: W0912 17:07:37.598536 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.598696 kubelet[3645]: E0912 17:07:37.598570 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.601807 kubelet[3645]: E0912 17:07:37.600045 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.601807 kubelet[3645]: W0912 17:07:37.600096 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.601807 kubelet[3645]: E0912 17:07:37.600131 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.833823 kubelet[3645]: E0912 17:07:37.832217 3645 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 17:07:37.833823 kubelet[3645]: E0912 17:07:37.832379 3645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/446130d0-3672-4a66-833e-3dfd7a8b7e9b-typha-certs podName:446130d0-3672-4a66-833e-3dfd7a8b7e9b nodeName:}" failed. No retries permitted until 2025-09-12 17:07:38.332346646 +0000 UTC m=+33.051154022 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/446130d0-3672-4a66-833e-3dfd7a8b7e9b-typha-certs") pod "calico-typha-676dc9885d-cdcp8" (UID: "446130d0-3672-4a66-833e-3dfd7a8b7e9b") : failed to sync secret cache: timed out waiting for the condition
Sep 12 17:07:37.861381 kubelet[3645]: E0912 17:07:37.860883 3645 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:07:37.861381 kubelet[3645]: E0912 17:07:37.860945 3645 projected.go:194] Error preparing data for projected volume kube-api-access-f5srw for pod calico-system/calico-typha-676dc9885d-cdcp8: failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:07:37.861381 kubelet[3645]: E0912 17:07:37.861045 3645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/446130d0-3672-4a66-833e-3dfd7a8b7e9b-kube-api-access-f5srw podName:446130d0-3672-4a66-833e-3dfd7a8b7e9b nodeName:}" failed. No retries permitted until 2025-09-12 17:07:38.361015738 +0000 UTC m=+33.079823114 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-f5srw" (UniqueName: "kubernetes.io/projected/446130d0-3672-4a66-833e-3dfd7a8b7e9b-kube-api-access-f5srw") pod "calico-typha-676dc9885d-cdcp8" (UID: "446130d0-3672-4a66-833e-3dfd7a8b7e9b") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 17:07:37.872224 kubelet[3645]: E0912 17:07:37.872161 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.872369 kubelet[3645]: W0912 17:07:37.872240 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.872369 kubelet[3645]: E0912 17:07:37.872282 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.873684 kubelet[3645]: E0912 17:07:37.873627 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.873684 kubelet[3645]: W0912 17:07:37.873672 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.873971 kubelet[3645]: E0912 17:07:37.873709 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.949608 kubelet[3645]: E0912 17:07:37.949568 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.950092 kubelet[3645]: W0912 17:07:37.949893 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.950092 kubelet[3645]: E0912 17:07:37.949944 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.979943 kubelet[3645]: E0912 17:07:37.978986 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.979943 kubelet[3645]: W0912 17:07:37.979050 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.979943 kubelet[3645]: E0912 17:07:37.979085 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:37.981472 kubelet[3645]: E0912 17:07:37.981388 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.982038 kubelet[3645]: W0912 17:07:37.981856 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.982835 kubelet[3645]: E0912 17:07:37.982610 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:37.986264 kubelet[3645]: E0912 17:07:37.985919 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:37.986264 kubelet[3645]: W0912 17:07:37.986095 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:37.986931 kubelet[3645]: E0912 17:07:37.986137 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.089828 kubelet[3645]: E0912 17:07:38.088609 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.089828 kubelet[3645]: W0912 17:07:38.088651 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.089828 kubelet[3645]: E0912 17:07:38.088686 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.091147 kubelet[3645]: E0912 17:07:38.090146 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.091147 kubelet[3645]: W0912 17:07:38.090178 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.091147 kubelet[3645]: E0912 17:07:38.090211 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.192347 kubelet[3645]: E0912 17:07:38.192292 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.192347 kubelet[3645]: W0912 17:07:38.192336 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.193086 kubelet[3645]: E0912 17:07:38.192374 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.193086 kubelet[3645]: E0912 17:07:38.192935 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.193086 kubelet[3645]: W0912 17:07:38.192962 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.193086 kubelet[3645]: E0912 17:07:38.192991 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.202277 containerd[2017]: time="2025-09-12T17:07:38.202212616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jw8fk,Uid:4cd9aff3-f79d-4ba7-8134-24647617ba39,Namespace:calico-system,Attempt:0,}"
Sep 12 17:07:38.260066 containerd[2017]: time="2025-09-12T17:07:38.260000632Z" level=info msg="connecting to shim f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87" address="unix:///run/containerd/s/60e0ea2f5426da350b6a2d8b0939c651dcbfd735f9551127064146b1932efb24" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:07:38.296221 kubelet[3645]: E0912 17:07:38.295663 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.298105 kubelet[3645]: W0912 17:07:38.296068 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.298105 kubelet[3645]: E0912 17:07:38.296642 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.301199 kubelet[3645]: E0912 17:07:38.301036 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.302002 kubelet[3645]: W0912 17:07:38.301885 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.302002 kubelet[3645]: E0912 17:07:38.301989 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.350609 systemd[1]: Started cri-containerd-f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87.scope - libcontainer container f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87.
Sep 12 17:07:38.403797 kubelet[3645]: E0912 17:07:38.403739 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.404598 kubelet[3645]: W0912 17:07:38.403857 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.404598 kubelet[3645]: E0912 17:07:38.403892 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.404933 kubelet[3645]: E0912 17:07:38.404897 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.405266 kubelet[3645]: W0912 17:07:38.405100 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.405266 kubelet[3645]: E0912 17:07:38.405145 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.406661 kubelet[3645]: E0912 17:07:38.406608 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.406842 kubelet[3645]: W0912 17:07:38.406675 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.406842 kubelet[3645]: E0912 17:07:38.406712 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.407472 kubelet[3645]: E0912 17:07:38.407318 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.407472 kubelet[3645]: W0912 17:07:38.407470 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.407648 kubelet[3645]: E0912 17:07:38.407507 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.408937 kubelet[3645]: E0912 17:07:38.408880 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.408937 kubelet[3645]: W0912 17:07:38.408931 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.409130 kubelet[3645]: E0912 17:07:38.408966 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.410644 kubelet[3645]: E0912 17:07:38.410586 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.410644 kubelet[3645]: W0912 17:07:38.410631 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.410880 kubelet[3645]: E0912 17:07:38.410666 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.412578 kubelet[3645]: E0912 17:07:38.412201 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.412578 kubelet[3645]: W0912 17:07:38.412344 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.412578 kubelet[3645]: E0912 17:07:38.412381 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.413476 kubelet[3645]: E0912 17:07:38.413428 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.413476 kubelet[3645]: W0912 17:07:38.413470 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.413689 kubelet[3645]: E0912 17:07:38.413504 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.417262 kubelet[3645]: E0912 17:07:38.415954 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.417262 kubelet[3645]: W0912 17:07:38.416004 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.417262 kubelet[3645]: E0912 17:07:38.416040 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.417547 kubelet[3645]: E0912 17:07:38.417402 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.417547 kubelet[3645]: W0912 17:07:38.417432 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.417646 kubelet[3645]: E0912 17:07:38.417602 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.479588 containerd[2017]: time="2025-09-12T17:07:38.479468262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jw8fk,Uid:4cd9aff3-f79d-4ba7-8134-24647617ba39,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\""
Sep 12 17:07:38.480015 kubelet[3645]: E0912 17:07:38.479086 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.483360 kubelet[3645]: W0912 17:07:38.480019 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.483360 kubelet[3645]: E0912 17:07:38.483241 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:07:38.489908 kubelet[3645]: E0912 17:07:38.489846 3645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:07:38.490584 kubelet[3645]: W0912 17:07:38.490529 3645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:07:38.491709 kubelet[3645]: E0912 17:07:38.491650 3645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 12 17:07:38.496784 containerd[2017]: time="2025-09-12T17:07:38.496707702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:07:38.757853 containerd[2017]: time="2025-09-12T17:07:38.757299979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-676dc9885d-cdcp8,Uid:446130d0-3672-4a66-833e-3dfd7a8b7e9b,Namespace:calico-system,Attempt:0,}"
Sep 12 17:07:38.791852 containerd[2017]: time="2025-09-12T17:07:38.791701735Z" level=info msg="connecting to shim 06cb046653ec4d5dee1da224d3cf19b2041031d459e09a069868edccd6a328a2" address="unix:///run/containerd/s/ee29ddcb23364ca109e311a169a72ab54002a5e81de4fb320341c08d26f8d2b1" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:07:38.842092 systemd[1]: Started cri-containerd-06cb046653ec4d5dee1da224d3cf19b2041031d459e09a069868edccd6a328a2.scope - libcontainer container 06cb046653ec4d5dee1da224d3cf19b2041031d459e09a069868edccd6a328a2.
Sep 12 17:07:38.932133 containerd[2017]: time="2025-09-12T17:07:38.931991816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-676dc9885d-cdcp8,Uid:446130d0-3672-4a66-833e-3dfd7a8b7e9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"06cb046653ec4d5dee1da224d3cf19b2041031d459e09a069868edccd6a328a2\""
Sep 12 17:07:39.588801 kubelet[3645]: E0912 17:07:39.588093 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-drgp6" podUID="3451646a-5365-4f0c-8470-08bd6eac7042"
Sep 12 17:07:39.794591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601765329.mount: Deactivated successfully.
Sep 12 17:07:39.959300 containerd[2017]: time="2025-09-12T17:07:39.958956753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:39.960902 containerd[2017]: time="2025-09-12T17:07:39.960836085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193"
Sep 12 17:07:39.962051 containerd[2017]: time="2025-09-12T17:07:39.961987989Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:39.966530 containerd[2017]: time="2025-09-12T17:07:39.966409209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:39.967707 containerd[2017]: time="2025-09-12T17:07:39.967581237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.470636955s"
Sep 12 17:07:39.967707 containerd[2017]: time="2025-09-12T17:07:39.967641021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 12 17:07:39.970420 containerd[2017]: time="2025-09-12T17:07:39.969928977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:07:39.977364 containerd[2017]: time="2025-09-12T17:07:39.977315217Z" level=info msg="CreateContainer within sandbox \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:07:39.993178 containerd[2017]: time="2025-09-12T17:07:39.993101445Z" level=info msg="Container 47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:07:40.013345 containerd[2017]: time="2025-09-12T17:07:40.013170437Z" level=info msg="CreateContainer within sandbox \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\""
Sep 12 17:07:40.014750 containerd[2017]: time="2025-09-12T17:07:40.014682473Z" level=info msg="StartContainer for \"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\""
Sep 12 17:07:40.018708 containerd[2017]: time="2025-09-12T17:07:40.018561437Z" level=info msg="connecting to shim 47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2" address="unix:///run/containerd/s/60e0ea2f5426da350b6a2d8b0939c651dcbfd735f9551127064146b1932efb24" protocol=ttrpc version=3
Sep 12 17:07:40.056127 systemd[1]: Started cri-containerd-47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2.scope - libcontainer container 47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2.
Sep 12 17:07:40.142591 containerd[2017]: time="2025-09-12T17:07:40.142357002Z" level=info msg="StartContainer for \"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\" returns successfully"
Sep 12 17:07:40.177382 systemd[1]: cri-containerd-47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2.scope: Deactivated successfully.
Sep 12 17:07:40.186794 containerd[2017]: time="2025-09-12T17:07:40.186710862Z" level=info msg="received exit event container_id:\"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\" id:\"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\" pid:4266 exited_at:{seconds:1757696860 nanos:186131766}"
Sep 12 17:07:40.187246 containerd[2017]: time="2025-09-12T17:07:40.187179654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\" id:\"47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2\" pid:4266 exited_at:{seconds:1757696860 nanos:186131766}"
Sep 12 17:07:40.245011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c4170f3d400dbe2a76ede37e49a102867b5ae0fd3849ef592ca6de939ea6c2-rootfs.mount: Deactivated successfully.
Sep 12 17:07:41.594462 kubelet[3645]: E0912 17:07:41.594160 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-drgp6" podUID="3451646a-5365-4f0c-8470-08bd6eac7042"
Sep 12 17:07:42.127701 containerd[2017]: time="2025-09-12T17:07:42.127622840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:42.129639 containerd[2017]: time="2025-09-12T17:07:42.129294896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396"
Sep 12 17:07:42.130595 containerd[2017]: time="2025-09-12T17:07:42.130536716Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:42.134254 containerd[2017]: time="2025-09-12T17:07:42.134181140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:42.135673 containerd[2017]: time="2025-09-12T17:07:42.135613184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.165622215s"
Sep 12 17:07:42.135673 containerd[2017]: time="2025-09-12T17:07:42.135669176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 12 17:07:42.138228 containerd[2017]: time="2025-09-12T17:07:42.138154760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:07:42.170756 containerd[2017]: time="2025-09-12T17:07:42.170711792Z" level=info msg="CreateContainer within sandbox \"06cb046653ec4d5dee1da224d3cf19b2041031d459e09a069868edccd6a328a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:07:42.184472 containerd[2017]: time="2025-09-12T17:07:42.182075144Z" level=info msg="Container cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:07:42.198008 containerd[2017]: time="2025-09-12T17:07:42.197930732Z" level=info msg="CreateContainer within sandbox \"06cb046653ec4d5dee1da224d3cf19b2041031d459e09a069868edccd6a328a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6\""
Sep 12 17:07:42.199204 containerd[2017]: time="2025-09-12T17:07:42.199031168Z" level=info msg="StartContainer for \"cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6\""
Sep 12 17:07:42.201613 containerd[2017]: time="2025-09-12T17:07:42.201520844Z" level=info msg="connecting to shim cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6" address="unix:///run/containerd/s/ee29ddcb23364ca109e311a169a72ab54002a5e81de4fb320341c08d26f8d2b1" protocol=ttrpc version=3
Sep 12 17:07:42.242101 systemd[1]: Started cri-containerd-cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6.scope - libcontainer container cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6.
Sep 12 17:07:42.324979 containerd[2017]: time="2025-09-12T17:07:42.324921009Z" level=info msg="StartContainer for \"cfbb0fc2069912b44bcb3b3ea36f3c0c0fa55a0af08e32659d02d7e111ee34f6\" returns successfully"
Sep 12 17:07:42.906326 kubelet[3645]: I0912 17:07:42.906109 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-676dc9885d-cdcp8" podStartSLOduration=3.70374476 podStartE2EDuration="6.90606306s" podCreationTimestamp="2025-09-12 17:07:36 +0000 UTC" firstStartedPulling="2025-09-12 17:07:38.93480488 +0000 UTC m=+33.653612244" lastFinishedPulling="2025-09-12 17:07:42.13712318 +0000 UTC m=+36.855930544" observedRunningTime="2025-09-12 17:07:42.885932147 +0000 UTC m=+37.604739535" watchObservedRunningTime="2025-09-12 17:07:42.90606306 +0000 UTC m=+37.624870436"
Sep 12 17:07:43.588357 kubelet[3645]: E0912 17:07:43.588154 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-drgp6" podUID="3451646a-5365-4f0c-8470-08bd6eac7042"
Sep 12 17:07:45.337810 containerd[2017]: time="2025-09-12T17:07:45.337512816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:45.339055 containerd[2017]: time="2025-09-12T17:07:45.338988396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 12 17:07:45.342806 containerd[2017]: time="2025-09-12T17:07:45.342676800Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:45.349712 containerd[2017]: time="2025-09-12T17:07:45.349283916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:07:45.354240 containerd[2017]: time="2025-09-12T17:07:45.354182628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.215964292s"
Sep 12 17:07:45.354450 containerd[2017]: time="2025-09-12T17:07:45.354420300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 12 17:07:45.363051 containerd[2017]: time="2025-09-12T17:07:45.362950080Z" level=info msg="CreateContainer within sandbox \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:07:45.377348 containerd[2017]: time="2025-09-12T17:07:45.377090124Z" level=info msg="Container e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:07:45.386130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288450721.mount: Deactivated successfully.
Sep 12 17:07:45.401870 containerd[2017]: time="2025-09-12T17:07:45.401762628Z" level=info msg="CreateContainer within sandbox \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\""
Sep 12 17:07:45.403179 containerd[2017]: time="2025-09-12T17:07:45.403075152Z" level=info msg="StartContainer for \"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\""
Sep 12 17:07:45.409130 containerd[2017]: time="2025-09-12T17:07:45.409020228Z" level=info msg="connecting to shim e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9" address="unix:///run/containerd/s/60e0ea2f5426da350b6a2d8b0939c651dcbfd735f9551127064146b1932efb24" protocol=ttrpc version=3
Sep 12 17:07:45.449131 systemd[1]: Started cri-containerd-e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9.scope - libcontainer container e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9.
Sep 12 17:07:45.544310 containerd[2017]: time="2025-09-12T17:07:45.544101241Z" level=info msg="StartContainer for \"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\" returns successfully"
Sep 12 17:07:45.589149 kubelet[3645]: E0912 17:07:45.588539 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-drgp6" podUID="3451646a-5365-4f0c-8470-08bd6eac7042"
Sep 12 17:07:46.772242 containerd[2017]: time="2025-09-12T17:07:46.772175583Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:07:46.777267 systemd[1]: cri-containerd-e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9.scope: Deactivated successfully.
Sep 12 17:07:46.778734 systemd[1]: cri-containerd-e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9.scope: Consumed 962ms CPU time, 184.2M memory peak, 165.8M written to disk.
Sep 12 17:07:46.782304 containerd[2017]: time="2025-09-12T17:07:46.781694367Z" level=info msg="received exit event container_id:\"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\" id:\"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\" pid:4372 exited_at:{seconds:1757696866 nanos:781339995}"
Sep 12 17:07:46.782570 containerd[2017]: time="2025-09-12T17:07:46.782529771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\" id:\"e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9\" pid:4372 exited_at:{seconds:1757696866 nanos:781339995}"
Sep 12 17:07:46.822141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1b24c32079418a00de9333405afed7286f66f4c2326adb3e4ca9e31e68449d9-rootfs.mount: Deactivated successfully.
Sep 12 17:07:46.841327 kubelet[3645]: I0912 17:07:46.838323 3645 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:07:46.958710 systemd[1]: Created slice kubepods-burstable-podf964977e_1efd_4336_bed4_9aaaf25a614d.slice - libcontainer container kubepods-burstable-podf964977e_1efd_4336_bed4_9aaaf25a614d.slice.
Sep 12 17:07:47.052543 systemd[1]: Created slice kubepods-burstable-pod7dba5676_b81c_4c99_a964_12a04841c7f1.slice - libcontainer container kubepods-burstable-pod7dba5676_b81c_4c99_a964_12a04841c7f1.slice.
Sep 12 17:07:47.075338 kubelet[3645]: I0912 17:07:47.075210 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2mc\" (UniqueName: \"kubernetes.io/projected/f964977e-1efd-4336-bed4-9aaaf25a614d-kube-api-access-sr2mc\") pod \"coredns-674b8bbfcf-d2m7b\" (UID: \"f964977e-1efd-4336-bed4-9aaaf25a614d\") " pod="kube-system/coredns-674b8bbfcf-d2m7b"
Sep 12 17:07:47.075338 kubelet[3645]: I0912 17:07:47.075309 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbmb5\" (UniqueName: \"kubernetes.io/projected/d9593beb-c142-41ba-95c3-de6f41b7c5f1-kube-api-access-gbmb5\") pod \"calico-apiserver-6b56fc6589-9xsvp\" (UID: \"d9593beb-c142-41ba-95c3-de6f41b7c5f1\") " pod="calico-apiserver/calico-apiserver-6b56fc6589-9xsvp"
Sep 12 17:07:47.100409 kubelet[3645]: I0912 17:07:47.075395 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9593beb-c142-41ba-95c3-de6f41b7c5f1-calico-apiserver-certs\") pod \"calico-apiserver-6b56fc6589-9xsvp\" (UID: \"d9593beb-c142-41ba-95c3-de6f41b7c5f1\") " pod="calico-apiserver/calico-apiserver-6b56fc6589-9xsvp"
Sep 12 17:07:47.100409 kubelet[3645]: I0912 17:07:47.075464 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f964977e-1efd-4336-bed4-9aaaf25a614d-config-volume\") pod \"coredns-674b8bbfcf-d2m7b\" (UID: \"f964977e-1efd-4336-bed4-9aaaf25a614d\") " pod="kube-system/coredns-674b8bbfcf-d2m7b"
Sep 12 17:07:47.143718 systemd[1]: Created slice kubepods-besteffort-podd9593beb_c142_41ba_95c3_de6f41b7c5f1.slice - libcontainer container kubepods-besteffort-podd9593beb_c142_41ba_95c3_de6f41b7c5f1.slice.
Sep 12 17:07:47.177535 kubelet[3645]: I0912 17:07:47.176891 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dba5676-b81c-4c99-a964-12a04841c7f1-config-volume\") pod \"coredns-674b8bbfcf-b2lt9\" (UID: \"7dba5676-b81c-4c99-a964-12a04841c7f1\") " pod="kube-system/coredns-674b8bbfcf-b2lt9"
Sep 12 17:07:47.177535 kubelet[3645]: I0912 17:07:47.176975 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k684n\" (UniqueName: \"kubernetes.io/projected/7dba5676-b81c-4c99-a964-12a04841c7f1-kube-api-access-k684n\") pod \"coredns-674b8bbfcf-b2lt9\" (UID: \"7dba5676-b81c-4c99-a964-12a04841c7f1\") " pod="kube-system/coredns-674b8bbfcf-b2lt9"
Sep 12 17:07:47.263199 systemd[1]: Created slice kubepods-besteffort-podc20d52c1_4f15_4e38_97b6_f025b6331f9b.slice - libcontainer container kubepods-besteffort-podc20d52c1_4f15_4e38_97b6_f025b6331f9b.slice.
Sep 12 17:07:47.277384 kubelet[3645]: I0912 17:07:47.277345 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d403a1e9-6639-4e3b-948a-4d6dfadcb895-tigera-ca-bundle\") pod \"calico-kube-controllers-796cbb8599-ddptz\" (UID: \"d403a1e9-6639-4e3b-948a-4d6dfadcb895\") " pod="calico-system/calico-kube-controllers-796cbb8599-ddptz"
Sep 12 17:07:47.278738 kubelet[3645]: I0912 17:07:47.278652 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c20d52c1-4f15-4e38-97b6-f025b6331f9b-calico-apiserver-certs\") pod \"calico-apiserver-6c7db7dcb6-hwncn\" (UID: \"c20d52c1-4f15-4e38-97b6-f025b6331f9b\") " pod="calico-apiserver/calico-apiserver-6c7db7dcb6-hwncn"
Sep 12 17:07:47.279096 kubelet[3645]: I0912 17:07:47.279051 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdpz\" (UniqueName: \"kubernetes.io/projected/d403a1e9-6639-4e3b-948a-4d6dfadcb895-kube-api-access-czdpz\") pod \"calico-kube-controllers-796cbb8599-ddptz\" (UID: \"d403a1e9-6639-4e3b-948a-4d6dfadcb895\") " pod="calico-system/calico-kube-controllers-796cbb8599-ddptz"
Sep 12 17:07:47.281125 kubelet[3645]: I0912 17:07:47.280923 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqmr\" (UniqueName: \"kubernetes.io/projected/c20d52c1-4f15-4e38-97b6-f025b6331f9b-kube-api-access-kmqmr\") pod \"calico-apiserver-6c7db7dcb6-hwncn\" (UID: \"c20d52c1-4f15-4e38-97b6-f025b6331f9b\") " pod="calico-apiserver/calico-apiserver-6c7db7dcb6-hwncn"
Sep 12 17:07:47.286367 containerd[2017]: time="2025-09-12T17:07:47.286155361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d2m7b,Uid:f964977e-1efd-4336-bed4-9aaaf25a614d,Namespace:kube-system,Attempt:0,}"
Sep 12 17:07:47.286402 systemd[1]: Created slice kubepods-besteffort-podd403a1e9_6639_4e3b_948a_4d6dfadcb895.slice - libcontainer container kubepods-besteffort-podd403a1e9_6639_4e3b_948a_4d6dfadcb895.slice.
Sep 12 17:07:47.314915 systemd[1]: Created slice kubepods-besteffort-pod49e369fe_fd16_48c1_8e18_a5bb62384f90.slice - libcontainer container kubepods-besteffort-pod49e369fe_fd16_48c1_8e18_a5bb62384f90.slice.
Sep 12 17:07:47.358540 systemd[1]: Created slice kubepods-besteffort-pod3c3f65ab_245e_480f_be16_5f1c4c67a27c.slice - libcontainer container kubepods-besteffort-pod3c3f65ab_245e_480f_be16_5f1c4c67a27c.slice.
Sep 12 17:07:47.366283 containerd[2017]: time="2025-09-12T17:07:47.365834654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2lt9,Uid:7dba5676-b81c-4c99-a964-12a04841c7f1,Namespace:kube-system,Attempt:0,}"
Sep 12 17:07:47.410175 kubelet[3645]: I0912 17:07:47.381414 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt8kc\" (UniqueName: \"kubernetes.io/projected/49e369fe-fd16-48c1-8e18-a5bb62384f90-kube-api-access-jt8kc\") pod \"whisker-5dc6f45979-jc5bd\" (UID: \"49e369fe-fd16-48c1-8e18-a5bb62384f90\") " pod="calico-system/whisker-5dc6f45979-jc5bd"
Sep 12 17:07:47.410175 kubelet[3645]: I0912 17:07:47.381517 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-backend-key-pair\") pod \"whisker-5dc6f45979-jc5bd\" (UID: \"49e369fe-fd16-48c1-8e18-a5bb62384f90\") " pod="calico-system/whisker-5dc6f45979-jc5bd"
Sep 12 17:07:47.410175 kubelet[3645]: I0912 17:07:47.381563 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-ca-bundle\") pod \"whisker-5dc6f45979-jc5bd\" (UID: \"49e369fe-fd16-48c1-8e18-a5bb62384f90\") " pod="calico-system/whisker-5dc6f45979-jc5bd"
Sep 12 17:07:47.464611 systemd[1]: Created slice kubepods-besteffort-pod8ac2608c_1fe0_4dc8_a918_dfae01ff6391.slice - libcontainer container kubepods-besteffort-pod8ac2608c_1fe0_4dc8_a918_dfae01ff6391.slice.
Sep 12 17:07:47.475263 containerd[2017]: time="2025-09-12T17:07:47.475155314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-9xsvp,Uid:d9593beb-c142-41ba-95c3-de6f41b7c5f1,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:07:47.483864 kubelet[3645]: I0912 17:07:47.483109 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57lb\" (UniqueName: \"kubernetes.io/projected/3c3f65ab-245e-480f-be16-5f1c4c67a27c-kube-api-access-v57lb\") pod \"calico-apiserver-6b56fc6589-wgdrv\" (UID: \"3c3f65ab-245e-480f-be16-5f1c4c67a27c\") " pod="calico-apiserver/calico-apiserver-6b56fc6589-wgdrv"
Sep 12 17:07:47.483864 kubelet[3645]: I0912 17:07:47.483210 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c3f65ab-245e-480f-be16-5f1c4c67a27c-calico-apiserver-certs\") pod \"calico-apiserver-6b56fc6589-wgdrv\" (UID: \"3c3f65ab-245e-480f-be16-5f1c4c67a27c\") " pod="calico-apiserver/calico-apiserver-6b56fc6589-wgdrv"
Sep 12 17:07:47.575921 containerd[2017]: time="2025-09-12T17:07:47.575362731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db7dcb6-hwncn,Uid:c20d52c1-4f15-4e38-97b6-f025b6331f9b,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:07:47.583678 kubelet[3645]: I0912 17:07:47.583623 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ac2608c-1fe0-4dc8-a918-dfae01ff6391-config\") pod \"goldmane-54d579b49d-kdphf\" (UID: \"8ac2608c-1fe0-4dc8-a918-dfae01ff6391\") " pod="calico-system/goldmane-54d579b49d-kdphf"
Sep 12 17:07:47.585837 kubelet[3645]: I0912 17:07:47.585117 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8ac2608c-1fe0-4dc8-a918-dfae01ff6391-goldmane-key-pair\") pod \"goldmane-54d579b49d-kdphf\" (UID: \"8ac2608c-1fe0-4dc8-a918-dfae01ff6391\") " pod="calico-system/goldmane-54d579b49d-kdphf"
Sep 12 17:07:47.585837 kubelet[3645]: I0912 17:07:47.585176 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njcgq\" (UniqueName: \"kubernetes.io/projected/8ac2608c-1fe0-4dc8-a918-dfae01ff6391-kube-api-access-njcgq\") pod \"goldmane-54d579b49d-kdphf\" (UID: \"8ac2608c-1fe0-4dc8-a918-dfae01ff6391\") " pod="calico-system/goldmane-54d579b49d-kdphf"
Sep 12 17:07:47.585837 kubelet[3645]: I0912 17:07:47.585224 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac2608c-1fe0-4dc8-a918-dfae01ff6391-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-kdphf\" (UID: \"8ac2608c-1fe0-4dc8-a918-dfae01ff6391\") " pod="calico-system/goldmane-54d579b49d-kdphf"
Sep 12 17:07:47.620576 systemd[1]: Created slice kubepods-besteffort-pod3451646a_5365_4f0c_8470_08bd6eac7042.slice - libcontainer container kubepods-besteffort-pod3451646a_5365_4f0c_8470_08bd6eac7042.slice.
Sep 12 17:07:47.628610 containerd[2017]: time="2025-09-12T17:07:47.628545939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-drgp6,Uid:3451646a-5365-4f0c-8470-08bd6eac7042,Namespace:calico-system,Attempt:0,}"
Sep 12 17:07:47.628795 containerd[2017]: time="2025-09-12T17:07:47.628545879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc6f45979-jc5bd,Uid:49e369fe-fd16-48c1-8e18-a5bb62384f90,Namespace:calico-system,Attempt:0,}"
Sep 12 17:07:47.629243 containerd[2017]: time="2025-09-12T17:07:47.629095179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796cbb8599-ddptz,Uid:d403a1e9-6639-4e3b-948a-4d6dfadcb895,Namespace:calico-system,Attempt:0,}"
Sep 12 17:07:47.637823 containerd[2017]: time="2025-09-12T17:07:47.637732923Z" level=error msg="Failed to destroy network for sandbox \"21129147c527551b8f5b2b22dcbb1ff326e8817941d32929b8ad5c054ed0b05c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:47.717286 containerd[2017]: time="2025-09-12T17:07:47.717234099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-wgdrv,Uid:3c3f65ab-245e-480f-be16-5f1c4c67a27c,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:07:47.753433 containerd[2017]: time="2025-09-12T17:07:47.751659868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d2m7b,Uid:f964977e-1efd-4336-bed4-9aaaf25a614d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21129147c527551b8f5b2b22dcbb1ff326e8817941d32929b8ad5c054ed0b05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:47.753711 kubelet[3645]: E0912 17:07:47.753628 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21129147c527551b8f5b2b22dcbb1ff326e8817941d32929b8ad5c054ed0b05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:47.753853 kubelet[3645]: E0912 17:07:47.753717 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21129147c527551b8f5b2b22dcbb1ff326e8817941d32929b8ad5c054ed0b05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d2m7b"
Sep 12 17:07:47.753979 kubelet[3645]: E0912 17:07:47.753755 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21129147c527551b8f5b2b22dcbb1ff326e8817941d32929b8ad5c054ed0b05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d2m7b"
Sep 12 17:07:47.754075 kubelet[3645]: E0912 17:07:47.754024 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d2m7b_kube-system(f964977e-1efd-4336-bed4-9aaaf25a614d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d2m7b_kube-system(f964977e-1efd-4336-bed4-9aaaf25a614d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21129147c527551b8f5b2b22dcbb1ff326e8817941d32929b8ad5c054ed0b05c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d2m7b" podUID="f964977e-1efd-4336-bed4-9aaaf25a614d"
Sep 12 17:07:47.779631 containerd[2017]: time="2025-09-12T17:07:47.779509792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kdphf,Uid:8ac2608c-1fe0-4dc8-a918-dfae01ff6391,Namespace:calico-system,Attempt:0,}"
Sep 12 17:07:47.937907 containerd[2017]: time="2025-09-12T17:07:47.937425593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:07:48.200804 containerd[2017]: time="2025-09-12T17:07:48.198604322Z" level=error msg="Failed to destroy network for sandbox \"7ac6849e7aafad843beb4a3b57ab55dcdbffc7ae80defa29ce74ad11f8ef60cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.208284 systemd[1]: run-netns-cni\x2de687c467\x2d5554\x2d63e3\x2d8179\x2dba26c7058305.mount: Deactivated successfully.
Sep 12 17:07:48.213055 containerd[2017]: time="2025-09-12T17:07:48.212920250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-wgdrv,Uid:3c3f65ab-245e-480f-be16-5f1c4c67a27c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ac6849e7aafad843beb4a3b57ab55dcdbffc7ae80defa29ce74ad11f8ef60cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.213367 kubelet[3645]: E0912 17:07:48.213238 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ac6849e7aafad843beb4a3b57ab55dcdbffc7ae80defa29ce74ad11f8ef60cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.213367 kubelet[3645]: E0912 17:07:48.213332 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ac6849e7aafad843beb4a3b57ab55dcdbffc7ae80defa29ce74ad11f8ef60cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56fc6589-wgdrv"
Sep 12 17:07:48.214263 kubelet[3645]: E0912 17:07:48.213385 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ac6849e7aafad843beb4a3b57ab55dcdbffc7ae80defa29ce74ad11f8ef60cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56fc6589-wgdrv"
Sep 12 17:07:48.214263 kubelet[3645]: E0912 17:07:48.213460 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b56fc6589-wgdrv_calico-apiserver(3c3f65ab-245e-480f-be16-5f1c4c67a27c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b56fc6589-wgdrv_calico-apiserver(3c3f65ab-245e-480f-be16-5f1c4c67a27c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ac6849e7aafad843beb4a3b57ab55dcdbffc7ae80defa29ce74ad11f8ef60cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b56fc6589-wgdrv" podUID="3c3f65ab-245e-480f-be16-5f1c4c67a27c"
Sep 12 17:07:48.257826 containerd[2017]: time="2025-09-12T17:07:48.252747278Z" level=error msg="Failed to destroy network for sandbox \"004f4881054ebddb3b1f340342126934b7aeb530dcc864ed94bbeb331cc115c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.264275 systemd[1]: run-netns-cni\x2d0a5ec3f8\x2d206f\x2dd9d3\x2dfea7\x2dfdcc88fa0b37.mount: Deactivated successfully.
Sep 12 17:07:48.265401 containerd[2017]: time="2025-09-12T17:07:48.265136762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2lt9,Uid:7dba5676-b81c-4c99-a964-12a04841c7f1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"004f4881054ebddb3b1f340342126934b7aeb530dcc864ed94bbeb331cc115c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.265648 kubelet[3645]: E0912 17:07:48.265525 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"004f4881054ebddb3b1f340342126934b7aeb530dcc864ed94bbeb331cc115c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.265867 kubelet[3645]: E0912 17:07:48.265691 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"004f4881054ebddb3b1f340342126934b7aeb530dcc864ed94bbeb331cc115c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b2lt9"
Sep 12 17:07:48.265867 kubelet[3645]: E0912 17:07:48.265790 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"004f4881054ebddb3b1f340342126934b7aeb530dcc864ed94bbeb331cc115c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b2lt9"
Sep 12 17:07:48.265997 kubelet[3645]: E0912 17:07:48.265905 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b2lt9_kube-system(7dba5676-b81c-4c99-a964-12a04841c7f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b2lt9_kube-system(7dba5676-b81c-4c99-a964-12a04841c7f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"004f4881054ebddb3b1f340342126934b7aeb530dcc864ed94bbeb331cc115c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b2lt9" podUID="7dba5676-b81c-4c99-a964-12a04841c7f1"
Sep 12 17:07:48.290805 containerd[2017]: time="2025-09-12T17:07:48.290717438Z" level=error msg="Failed to destroy network for sandbox \"850ff03b64d5161e1866efe86a2aa5d72278c8843afe529af9c22d72e3f8572c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.300585 containerd[2017]: time="2025-09-12T17:07:48.300511670Z" level=error msg="Failed to destroy network for sandbox \"b3b198e32268538dcf5cb5ca6961b7f06cc0a0ba17c6f7c2be96029010b4d4da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:07:48.304247 systemd[1]: run-netns-cni\x2d9888dafd\x2dde6a\x2dad59\x2d2a77\x2d156f22d7c7da.mount: Deactivated successfully.
Sep 12 17:07:48.310727 containerd[2017]: time="2025-09-12T17:07:48.310665626Z" level=error msg="Failed to destroy network for sandbox \"ff6451be457c4919bb2e8f9f9259a82d099e438f046bb8d268e5557b7ead633e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.315549 systemd[1]: run-netns-cni\x2d31f47ce9\x2dacfb\x2d35c8\x2d72e7\x2de33d45019ae3.mount: Deactivated successfully. Sep 12 17:07:48.317487 containerd[2017]: time="2025-09-12T17:07:48.316935818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db7dcb6-hwncn,Uid:c20d52c1-4f15-4e38-97b6-f025b6331f9b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"850ff03b64d5161e1866efe86a2aa5d72278c8843afe529af9c22d72e3f8572c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.319453 kubelet[3645]: E0912 17:07:48.319377 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"850ff03b64d5161e1866efe86a2aa5d72278c8843afe529af9c22d72e3f8572c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.319580 kubelet[3645]: E0912 17:07:48.319490 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"850ff03b64d5161e1866efe86a2aa5d72278c8843afe529af9c22d72e3f8572c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7db7dcb6-hwncn" Sep 12 17:07:48.319580 kubelet[3645]: E0912 17:07:48.319551 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"850ff03b64d5161e1866efe86a2aa5d72278c8843afe529af9c22d72e3f8572c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7db7dcb6-hwncn" Sep 12 17:07:48.320904 kubelet[3645]: E0912 17:07:48.319672 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7db7dcb6-hwncn_calico-apiserver(c20d52c1-4f15-4e38-97b6-f025b6331f9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7db7dcb6-hwncn_calico-apiserver(c20d52c1-4f15-4e38-97b6-f025b6331f9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"850ff03b64d5161e1866efe86a2aa5d72278c8843afe529af9c22d72e3f8572c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7db7dcb6-hwncn" podUID="c20d52c1-4f15-4e38-97b6-f025b6331f9b" Sep 12 17:07:48.321363 containerd[2017]: time="2025-09-12T17:07:48.321240530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-9xsvp,Uid:d9593beb-c142-41ba-95c3-de6f41b7c5f1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b198e32268538dcf5cb5ca6961b7f06cc0a0ba17c6f7c2be96029010b4d4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:07:48.321931 containerd[2017]: time="2025-09-12T17:07:48.321759830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc6f45979-jc5bd,Uid:49e369fe-fd16-48c1-8e18-a5bb62384f90,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6451be457c4919bb2e8f9f9259a82d099e438f046bb8d268e5557b7ead633e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.324158 kubelet[3645]: E0912 17:07:48.324088 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6451be457c4919bb2e8f9f9259a82d099e438f046bb8d268e5557b7ead633e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.324284 kubelet[3645]: E0912 17:07:48.324179 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6451be457c4919bb2e8f9f9259a82d099e438f046bb8d268e5557b7ead633e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dc6f45979-jc5bd" Sep 12 17:07:48.324284 kubelet[3645]: E0912 17:07:48.324214 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff6451be457c4919bb2e8f9f9259a82d099e438f046bb8d268e5557b7ead633e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5dc6f45979-jc5bd" Sep 12 17:07:48.324433 kubelet[3645]: E0912 17:07:48.324298 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dc6f45979-jc5bd_calico-system(49e369fe-fd16-48c1-8e18-a5bb62384f90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dc6f45979-jc5bd_calico-system(49e369fe-fd16-48c1-8e18-a5bb62384f90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff6451be457c4919bb2e8f9f9259a82d099e438f046bb8d268e5557b7ead633e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dc6f45979-jc5bd" podUID="49e369fe-fd16-48c1-8e18-a5bb62384f90" Sep 12 17:07:48.324691 kubelet[3645]: E0912 17:07:48.323957 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b198e32268538dcf5cb5ca6961b7f06cc0a0ba17c6f7c2be96029010b4d4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.324756 kubelet[3645]: E0912 17:07:48.324714 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b198e32268538dcf5cb5ca6961b7f06cc0a0ba17c6f7c2be96029010b4d4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56fc6589-9xsvp" Sep 12 17:07:48.324890 kubelet[3645]: E0912 17:07:48.324751 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b3b198e32268538dcf5cb5ca6961b7f06cc0a0ba17c6f7c2be96029010b4d4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b56fc6589-9xsvp" Sep 12 17:07:48.324890 kubelet[3645]: E0912 17:07:48.324849 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b56fc6589-9xsvp_calico-apiserver(d9593beb-c142-41ba-95c3-de6f41b7c5f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b56fc6589-9xsvp_calico-apiserver(d9593beb-c142-41ba-95c3-de6f41b7c5f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3b198e32268538dcf5cb5ca6961b7f06cc0a0ba17c6f7c2be96029010b4d4da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b56fc6589-9xsvp" podUID="d9593beb-c142-41ba-95c3-de6f41b7c5f1" Sep 12 17:07:48.342576 containerd[2017]: time="2025-09-12T17:07:48.342219591Z" level=error msg="Failed to destroy network for sandbox \"8ccddcd092acf3ef8358fcd1ceff2527662e99b3d42dfd4ba78cb2e9d45732dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.342747 containerd[2017]: time="2025-09-12T17:07:48.342613647Z" level=error msg="Failed to destroy network for sandbox \"8db516f4fa15ce2fc5ec3b6547dbbaba635f5085e25f0bb32f4751369d54b281\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.345970 containerd[2017]: 
time="2025-09-12T17:07:48.345695367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796cbb8599-ddptz,Uid:d403a1e9-6639-4e3b-948a-4d6dfadcb895,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db516f4fa15ce2fc5ec3b6547dbbaba635f5085e25f0bb32f4751369d54b281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.346673 kubelet[3645]: E0912 17:07:48.346621 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db516f4fa15ce2fc5ec3b6547dbbaba635f5085e25f0bb32f4751369d54b281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.347350 kubelet[3645]: E0912 17:07:48.347072 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db516f4fa15ce2fc5ec3b6547dbbaba635f5085e25f0bb32f4751369d54b281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796cbb8599-ddptz" Sep 12 17:07:48.347350 kubelet[3645]: E0912 17:07:48.347121 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db516f4fa15ce2fc5ec3b6547dbbaba635f5085e25f0bb32f4751369d54b281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-796cbb8599-ddptz" Sep 12 17:07:48.347350 kubelet[3645]: E0912 17:07:48.347222 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-796cbb8599-ddptz_calico-system(d403a1e9-6639-4e3b-948a-4d6dfadcb895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-796cbb8599-ddptz_calico-system(d403a1e9-6639-4e3b-948a-4d6dfadcb895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8db516f4fa15ce2fc5ec3b6547dbbaba635f5085e25f0bb32f4751369d54b281\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796cbb8599-ddptz" podUID="d403a1e9-6639-4e3b-948a-4d6dfadcb895" Sep 12 17:07:48.349060 containerd[2017]: time="2025-09-12T17:07:48.348888063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-drgp6,Uid:3451646a-5365-4f0c-8470-08bd6eac7042,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ccddcd092acf3ef8358fcd1ceff2527662e99b3d42dfd4ba78cb2e9d45732dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.350428 kubelet[3645]: E0912 17:07:48.349614 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ccddcd092acf3ef8358fcd1ceff2527662e99b3d42dfd4ba78cb2e9d45732dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.350428 kubelet[3645]: E0912 17:07:48.349683 3645 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ccddcd092acf3ef8358fcd1ceff2527662e99b3d42dfd4ba78cb2e9d45732dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-drgp6" Sep 12 17:07:48.350428 kubelet[3645]: E0912 17:07:48.349716 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ccddcd092acf3ef8358fcd1ceff2527662e99b3d42dfd4ba78cb2e9d45732dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-drgp6" Sep 12 17:07:48.350738 kubelet[3645]: E0912 17:07:48.349812 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-drgp6_calico-system(3451646a-5365-4f0c-8470-08bd6eac7042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-drgp6_calico-system(3451646a-5365-4f0c-8470-08bd6eac7042)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ccddcd092acf3ef8358fcd1ceff2527662e99b3d42dfd4ba78cb2e9d45732dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-drgp6" podUID="3451646a-5365-4f0c-8470-08bd6eac7042" Sep 12 17:07:48.362000 containerd[2017]: time="2025-09-12T17:07:48.361924743Z" level=error msg="Failed to destroy network for sandbox \"fc83665866a216a7241b2ddc77aabeffabc39f6e7ad9705445b4cf5a3ae8bf7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.363615 containerd[2017]: time="2025-09-12T17:07:48.363547347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kdphf,Uid:8ac2608c-1fe0-4dc8-a918-dfae01ff6391,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc83665866a216a7241b2ddc77aabeffabc39f6e7ad9705445b4cf5a3ae8bf7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.364431 kubelet[3645]: E0912 17:07:48.363928 3645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc83665866a216a7241b2ddc77aabeffabc39f6e7ad9705445b4cf5a3ae8bf7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:07:48.364431 kubelet[3645]: E0912 17:07:48.364038 3645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc83665866a216a7241b2ddc77aabeffabc39f6e7ad9705445b4cf5a3ae8bf7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-kdphf" Sep 12 17:07:48.364431 kubelet[3645]: E0912 17:07:48.364075 3645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc83665866a216a7241b2ddc77aabeffabc39f6e7ad9705445b4cf5a3ae8bf7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-kdphf" Sep 12 17:07:48.364639 kubelet[3645]: E0912 17:07:48.364247 3645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-kdphf_calico-system(8ac2608c-1fe0-4dc8-a918-dfae01ff6391)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-kdphf_calico-system(8ac2608c-1fe0-4dc8-a918-dfae01ff6391)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc83665866a216a7241b2ddc77aabeffabc39f6e7ad9705445b4cf5a3ae8bf7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-kdphf" podUID="8ac2608c-1fe0-4dc8-a918-dfae01ff6391" Sep 12 17:07:48.822034 systemd[1]: run-netns-cni\x2d90554172\x2d77f5\x2d5f1c\x2d23f7\x2d19327bb17d11.mount: Deactivated successfully. Sep 12 17:07:48.822235 systemd[1]: run-netns-cni\x2d387aab63\x2d7e67\x2ddff9\x2d219a\x2d195c7c47e74b.mount: Deactivated successfully. Sep 12 17:07:48.822360 systemd[1]: run-netns-cni\x2de34e8281\x2d8d7d\x2d5658\x2d6423\x2dbdc105551f8c.mount: Deactivated successfully. Sep 12 17:07:48.822478 systemd[1]: run-netns-cni\x2d7a22dd9e\x2dbc60\x2d9cab\x2da136\x2da76be4501d12.mount: Deactivated successfully. Sep 12 17:07:54.536172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255128995.mount: Deactivated successfully. 
Sep 12 17:07:54.585582 containerd[2017]: time="2025-09-12T17:07:54.585521590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:54.586638 containerd[2017]: time="2025-09-12T17:07:54.586541914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:07:54.588628 containerd[2017]: time="2025-09-12T17:07:54.588571522Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:54.593789 containerd[2017]: time="2025-09-12T17:07:54.592490362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:54.593789 containerd[2017]: time="2025-09-12T17:07:54.593615794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.656121753s" Sep 12 17:07:54.593789 containerd[2017]: time="2025-09-12T17:07:54.593660194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:07:54.630243 containerd[2017]: time="2025-09-12T17:07:54.630194614Z" level=info msg="CreateContainer within sandbox \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:07:54.657108 containerd[2017]: time="2025-09-12T17:07:54.657051622Z" level=info msg="Container 
1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:07:54.678958 containerd[2017]: time="2025-09-12T17:07:54.678897598Z" level=info msg="CreateContainer within sandbox \"f6b368a1dc83137a205af2e5e1fe38fab4566803361e685c7f944b1e2dd3fb87\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\"" Sep 12 17:07:54.681167 containerd[2017]: time="2025-09-12T17:07:54.681118114Z" level=info msg="StartContainer for \"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\"" Sep 12 17:07:54.684738 containerd[2017]: time="2025-09-12T17:07:54.684679870Z" level=info msg="connecting to shim 1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58" address="unix:///run/containerd/s/60e0ea2f5426da350b6a2d8b0939c651dcbfd735f9551127064146b1932efb24" protocol=ttrpc version=3 Sep 12 17:07:54.726296 systemd[1]: Started cri-containerd-1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58.scope - libcontainer container 1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58. 
Sep 12 17:07:54.836890 containerd[2017]: time="2025-09-12T17:07:54.836109455Z" level=info msg="StartContainer for \"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\" returns successfully" Sep 12 17:07:55.006803 kubelet[3645]: I0912 17:07:55.005480 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jw8fk" podStartSLOduration=2.906439928 podStartE2EDuration="19.005452352s" podCreationTimestamp="2025-09-12 17:07:36 +0000 UTC" firstStartedPulling="2025-09-12 17:07:38.495936306 +0000 UTC m=+33.214743670" lastFinishedPulling="2025-09-12 17:07:54.594948718 +0000 UTC m=+49.313756094" observedRunningTime="2025-09-12 17:07:55.005130032 +0000 UTC m=+49.723937432" watchObservedRunningTime="2025-09-12 17:07:55.005452352 +0000 UTC m=+49.724259716" Sep 12 17:07:55.216698 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:07:55.216963 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:07:55.555442 kubelet[3645]: I0912 17:07:55.554262 3645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-ca-bundle\") pod \"49e369fe-fd16-48c1-8e18-a5bb62384f90\" (UID: \"49e369fe-fd16-48c1-8e18-a5bb62384f90\") " Sep 12 17:07:55.555818 kubelet[3645]: I0912 17:07:55.555362 3645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "49e369fe-fd16-48c1-8e18-a5bb62384f90" (UID: "49e369fe-fd16-48c1-8e18-a5bb62384f90"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:07:55.556158 kubelet[3645]: I0912 17:07:55.555999 3645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-backend-key-pair\") pod \"49e369fe-fd16-48c1-8e18-a5bb62384f90\" (UID: \"49e369fe-fd16-48c1-8e18-a5bb62384f90\") " Sep 12 17:07:55.556838 kubelet[3645]: I0912 17:07:55.556705 3645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt8kc\" (UniqueName: \"kubernetes.io/projected/49e369fe-fd16-48c1-8e18-a5bb62384f90-kube-api-access-jt8kc\") pod \"49e369fe-fd16-48c1-8e18-a5bb62384f90\" (UID: \"49e369fe-fd16-48c1-8e18-a5bb62384f90\") " Sep 12 17:07:55.560824 kubelet[3645]: I0912 17:07:55.560099 3645 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-ca-bundle\") on node \"ip-172-31-16-146\" DevicePath \"\"" Sep 12 17:07:55.572440 kubelet[3645]: I0912 17:07:55.571802 3645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "49e369fe-fd16-48c1-8e18-a5bb62384f90" (UID: "49e369fe-fd16-48c1-8e18-a5bb62384f90"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:07:55.572656 systemd[1]: var-lib-kubelet-pods-49e369fe\x2dfd16\x2d48c1\x2d8e18\x2da5bb62384f90-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:07:55.578876 kubelet[3645]: I0912 17:07:55.578748 3645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e369fe-fd16-48c1-8e18-a5bb62384f90-kube-api-access-jt8kc" (OuterVolumeSpecName: "kube-api-access-jt8kc") pod "49e369fe-fd16-48c1-8e18-a5bb62384f90" (UID: "49e369fe-fd16-48c1-8e18-a5bb62384f90"). InnerVolumeSpecName "kube-api-access-jt8kc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:07:55.580749 systemd[1]: var-lib-kubelet-pods-49e369fe\x2dfd16\x2d48c1\x2d8e18\x2da5bb62384f90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djt8kc.mount: Deactivated successfully. Sep 12 17:07:55.612819 systemd[1]: Removed slice kubepods-besteffort-pod49e369fe_fd16_48c1_8e18_a5bb62384f90.slice - libcontainer container kubepods-besteffort-pod49e369fe_fd16_48c1_8e18_a5bb62384f90.slice. Sep 12 17:07:55.662932 kubelet[3645]: I0912 17:07:55.662841 3645 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/49e369fe-fd16-48c1-8e18-a5bb62384f90-whisker-backend-key-pair\") on node \"ip-172-31-16-146\" DevicePath \"\"" Sep 12 17:07:55.662932 kubelet[3645]: I0912 17:07:55.662893 3645 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jt8kc\" (UniqueName: \"kubernetes.io/projected/49e369fe-fd16-48c1-8e18-a5bb62384f90-kube-api-access-jt8kc\") on node \"ip-172-31-16-146\" DevicePath \"\"" Sep 12 17:07:56.125754 systemd[1]: Created slice kubepods-besteffort-pod2ae8ddde_7ddb_4b35_b1b0_1e28e8584c5b.slice - libcontainer container kubepods-besteffort-pod2ae8ddde_7ddb_4b35_b1b0_1e28e8584c5b.slice. 
Sep 12 17:07:56.270589 kubelet[3645]: I0912 17:07:56.270529 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b-whisker-backend-key-pair\") pod \"whisker-5b8b8fc866-jgbzs\" (UID: \"2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b\") " pod="calico-system/whisker-5b8b8fc866-jgbzs" Sep 12 17:07:56.271399 kubelet[3645]: I0912 17:07:56.271366 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b-whisker-ca-bundle\") pod \"whisker-5b8b8fc866-jgbzs\" (UID: \"2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b\") " pod="calico-system/whisker-5b8b8fc866-jgbzs" Sep 12 17:07:56.271581 kubelet[3645]: I0912 17:07:56.271555 3645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgfvs\" (UniqueName: \"kubernetes.io/projected/2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b-kube-api-access-cgfvs\") pod \"whisker-5b8b8fc866-jgbzs\" (UID: \"2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b\") " pod="calico-system/whisker-5b8b8fc866-jgbzs" Sep 12 17:07:56.435691 containerd[2017]: time="2025-09-12T17:07:56.435496007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8b8fc866-jgbzs,Uid:2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b,Namespace:calico-system,Attempt:0,}" Sep 12 17:07:56.781761 (udev-worker)[4696]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:07:56.784263 systemd-networkd[1897]: cali20892b4fd3e: Link UP
Sep 12 17:07:56.786256 systemd-networkd[1897]: cali20892b4fd3e: Gained carrier
Sep 12 17:07:56.837998 containerd[2017]: 2025-09-12 17:07:56.478 [INFO][4726] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:07:56.837998 containerd[2017]: 2025-09-12 17:07:56.568 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0 whisker-5b8b8fc866- calico-system 2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b 978 0 2025-09-12 17:07:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b8b8fc866 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-146 whisker-5b8b8fc866-jgbzs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali20892b4fd3e [] [] }} ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-"
Sep 12 17:07:56.837998 containerd[2017]: 2025-09-12 17:07:56.568 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.837998 containerd[2017]: 2025-09-12 17:07:56.675 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" HandleID="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Workload="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.675 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" HandleID="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Workload="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-146", "pod":"whisker-5b8b8fc866-jgbzs", "timestamp":"2025-09-12 17:07:56.675303192 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.675 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.676 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.676 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146'
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.692 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" host="ip-172-31-16-146"
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.701 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146"
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.708 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.712 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:56.838360 containerd[2017]: 2025-09-12 17:07:56.715 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.716 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" host="ip-172-31-16-146"
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.718 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.725 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" host="ip-172-31-16-146"
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.752 [INFO][4738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.65/26] block=192.168.18.64/26 handle="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" host="ip-172-31-16-146"
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.752 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.65/26] handle="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" host="ip-172-31-16-146"
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.753 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:07:56.838942 containerd[2017]: 2025-09-12 17:07:56.754 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" HandleID="k8s-pod-network.de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Workload="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.840370 containerd[2017]: 2025-09-12 17:07:56.765 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0", GenerateName:"whisker-5b8b8fc866-", Namespace:"calico-system", SelfLink:"", UID:"2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b8b8fc866", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"whisker-5b8b8fc866-jgbzs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20892b4fd3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:07:56.840370 containerd[2017]: 2025-09-12 17:07:56.765 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.65/32] ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.840584 containerd[2017]: 2025-09-12 17:07:56.765 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20892b4fd3e ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.840584 containerd[2017]: 2025-09-12 17:07:56.786 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.840692 containerd[2017]: 2025-09-12 17:07:56.789 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0", GenerateName:"whisker-5b8b8fc866-", Namespace:"calico-system", SelfLink:"", UID:"2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b8b8fc866", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60", Pod:"whisker-5b8b8fc866-jgbzs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20892b4fd3e", MAC:"e2:61:31:ed:74:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:07:56.843904 containerd[2017]: 2025-09-12 17:07:56.833 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" Namespace="calico-system" Pod="whisker-5b8b8fc866-jgbzs" WorkloadEndpoint="ip--172--31--16--146-k8s-whisker--5b8b8fc866--jgbzs-eth0"
Sep 12 17:07:56.914174 containerd[2017]: time="2025-09-12T17:07:56.914096149Z" level=info msg="connecting to shim de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60" address="unix:///run/containerd/s/70acc44b629774315148fae603325acd6b64723460e7940a04fdd3604de479a0" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:07:57.012358 systemd[1]: Started cri-containerd-de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60.scope - libcontainer container de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60.
Sep 12 17:07:57.195495 containerd[2017]: time="2025-09-12T17:07:57.195426766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8b8fc866-jgbzs,Uid:2ae8ddde-7ddb-4b35-b1b0-1e28e8584c5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60\""
Sep 12 17:07:57.203050 containerd[2017]: time="2025-09-12T17:07:57.202988303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 17:07:57.602677 kubelet[3645]: I0912 17:07:57.602586 3645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e369fe-fd16-48c1-8e18-a5bb62384f90" path="/var/lib/kubelet/pods/49e369fe-fd16-48c1-8e18-a5bb62384f90/volumes"
Sep 12 17:07:58.373318 (udev-worker)[4698]: Network interface NamePolicy= disabled on kernel command line.
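The IPAM sequence these entries record (acquire the host-wide lock, confirm the node's affinity for block 192.168.18.64/26, load the block, claim the next free address under a per-workload handle, write the block back, release the lock) can be modeled with a toy Go sketch. This is illustrative only and not Calico's real implementation: the `ipamBlock` type, the `newBlock`/`assign` functions, and the naive next-free cursor are all hypothetical stand-ins for the logic behind `ipam/ipam.go`.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock is a toy model of a /26 affinity block: in the log,
// node ip-172-31-16-146 holds an affinity for 192.168.18.64/26 and
// hands out one address per workload handle.
type ipamBlock struct {
	mu        sync.Mutex        // stands in for the host-wide IPAM lock
	cidr      *net.IPNet        // the affine block, e.g. 192.168.18.64/26
	allocated map[string]string // handle -> assigned address
	next      int               // naive free-address cursor (ignores block bounds/reservations)
}

func newBlock(cidr string) (*ipamBlock, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &ipamBlock{cidr: n, allocated: map[string]string{}, next: 1}, nil
}

// assign mirrors the logged sequence: take the lock, claim the next
// free address in the block, record it under the workload handle.
func (b *ipamBlock) assign(handle string) string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ip := b.cidr.IP.To4()
	addr := net.IPv4(ip[0], ip[1], ip[2], ip[3]+byte(b.next))
	b.next++
	b.allocated[handle] = addr.String()
	return addr.String()
}

func main() {
	block, _ := newBlock("192.168.18.64/26")
	// Same order as the log: whisker gets .65, then coredns .66, then apiserver .67.
	fmt.Println(block.assign("whisker-5b8b8fc866-jgbzs"))
	fmt.Println(block.assign("coredns-674b8bbfcf-d2m7b"))
	fmt.Println(block.assign("calico-apiserver-6b56fc6589-wgdrv"))
}
```

The host-wide lock explains why the three assignments below serialize cleanly even though the CNI ADDs for coredns and the apiserver run concurrently.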
Sep 12 17:07:58.377035 systemd-networkd[1897]: vxlan.calico: Link UP
Sep 12 17:07:58.377042 systemd-networkd[1897]: vxlan.calico: Gained carrier
Sep 12 17:07:58.589018 containerd[2017]: time="2025-09-12T17:07:58.588034393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-wgdrv,Uid:3c3f65ab-245e-480f-be16-5f1c4c67a27c,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:07:58.589909 containerd[2017]: time="2025-09-12T17:07:58.589826317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d2m7b,Uid:f964977e-1efd-4336-bed4-9aaaf25a614d,Namespace:kube-system,Attempt:0,}"
Sep 12 17:07:58.651786 systemd-networkd[1897]: cali20892b4fd3e: Gained IPv6LL
Sep 12 17:07:58.937856 systemd-networkd[1897]: cali34a0a1721ba: Link UP
Sep 12 17:07:58.939860 systemd-networkd[1897]: cali34a0a1721ba: Gained carrier
Sep 12 17:07:58.993266 containerd[2017]: 2025-09-12 17:07:58.777 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0 coredns-674b8bbfcf- kube-system f964977e-1efd-4336-bed4-9aaaf25a614d 889 0 2025-09-12 17:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-146 coredns-674b8bbfcf-d2m7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34a0a1721ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-"
Sep 12 17:07:58.993266 containerd[2017]: 2025-09-12 17:07:58.777 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:58.993266 containerd[2017]: 2025-09-12 17:07:58.839 [INFO][4982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" HandleID="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Workload="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.839 [INFO][4982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" HandleID="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Workload="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3820), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-146", "pod":"coredns-674b8bbfcf-d2m7b", "timestamp":"2025-09-12 17:07:58.839121303 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.839 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.840 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.840 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146'
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.856 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" host="ip-172-31-16-146"
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.865 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146"
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.875 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.879 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:58.993588 containerd[2017]: 2025-09-12 17:07:58.884 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.884 [INFO][4982] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" host="ip-172-31-16-146"
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.887 [INFO][4982] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.894 [INFO][4982] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" host="ip-172-31-16-146"
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.910 [INFO][4982] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.66/26] block=192.168.18.64/26 handle="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" host="ip-172-31-16-146"
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.911 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.66/26] handle="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" host="ip-172-31-16-146"
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.912 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:07:58.994121 containerd[2017]: 2025-09-12 17:07:58.912 [INFO][4982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] IPv6=[] ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" HandleID="k8s-pod-network.540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Workload="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:58.996309 containerd[2017]: 2025-09-12 17:07:58.929 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f964977e-1efd-4336-bed4-9aaaf25a614d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"coredns-674b8bbfcf-d2m7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34a0a1721ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:07:58.996309 containerd[2017]: 2025-09-12 17:07:58.930 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.66/32] ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:58.996309 containerd[2017]: 2025-09-12 17:07:58.930 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34a0a1721ba ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:58.996309 containerd[2017]: 2025-09-12 17:07:58.944 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:58.996309 containerd[2017]: 2025-09-12 17:07:58.948 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f964977e-1efd-4336-bed4-9aaaf25a614d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e", Pod:"coredns-674b8bbfcf-d2m7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34a0a1721ba", MAC:"a6:3e:20:c7:a8:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:07:58.996309 containerd[2017]: 2025-09-12 17:07:58.979 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" Namespace="kube-system" Pod="coredns-674b8bbfcf-d2m7b" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--d2m7b-eth0"
Sep 12 17:07:59.073953 containerd[2017]: time="2025-09-12T17:07:59.073878876Z" level=info msg="connecting to shim 540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e" address="unix:///run/containerd/s/31540582fc1d1c6ac1d7951e2244cef39846304c3cff68d09e02462ff6b5479a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:07:59.171202 systemd[1]: Started cri-containerd-540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e.scope - libcontainer container 540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e.
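When reading a journal like this, the per-pod outcome of each CNI ADD is summarized in the `ipam/ipam_plugin.go 283` lines ("Calico CNI IPAM assigned addresses IPv4=[...]"). A small grep-style Go helper can extract those addresses; the `assignedIPs` function and the regexp are made up for this example and are not part of Calico or containerd.

```go
package main

import (
	"fmt"
	"regexp"
)

// assignedRe matches the IPAM summary entries emitted by the Calico CNI
// plugin, e.g. "... Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] ...".
var assignedRe = regexp.MustCompile(`Calico CNI IPAM assigned addresses IPv4=\[([0-9./]+)\]`)

// assignedIPs scans journal lines and returns every IPv4 address/prefix
// that the IPAM plugin reported as assigned, in log order.
func assignedIPs(lines []string) []string {
	var out []string
	for _, l := range lines {
		if m := assignedRe.FindStringSubmatch(l); m != nil {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	sample := []string{
		`... ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ...`,
		`... ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] IPv6=[] ...`,
	}
	fmt.Println(assignedIPs(sample)) // [192.168.18.65/26 192.168.18.66/26]
}
```

Piping `journalctl -u containerd` through such a filter gives a quick map of which workload endpoints received which addresses from the 192.168.18.64/26 block.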
Sep 12 17:07:59.213014 systemd-networkd[1897]: cali0590857b47d: Link UP
Sep 12 17:07:59.226603 systemd-networkd[1897]: cali0590857b47d: Gained carrier
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.761 [INFO][4962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0 calico-apiserver-6b56fc6589- calico-apiserver 3c3f65ab-245e-480f-be16-5f1c4c67a27c 901 0 2025-09-12 17:07:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b56fc6589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-146 calico-apiserver-6b56fc6589-wgdrv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0590857b47d [] [] }} ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.763 [INFO][4962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.873 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" HandleID="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.875 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" HandleID="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-146", "pod":"calico-apiserver-6b56fc6589-wgdrv", "timestamp":"2025-09-12 17:07:58.873457551 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.875 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.911 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.912 [INFO][4976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146'
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.965 [INFO][4976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:58.983 [INFO][4976] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.000 [INFO][4976] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.007 [INFO][4976] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.018 [INFO][4976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.019 [INFO][4976] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.028 [INFO][4976] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.057 [INFO][4976] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.176 [INFO][4976] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.67/26] block=192.168.18.64/26 handle="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.176 [INFO][4976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.67/26] handle="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" host="ip-172-31-16-146"
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.177 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:07:59.276993 containerd[2017]: 2025-09-12 17:07:59.177 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.67/26] IPv6=[] ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" HandleID="k8s-pod-network.c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.282379 containerd[2017]: 2025-09-12 17:07:59.186 [INFO][4962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0", GenerateName:"calico-apiserver-6b56fc6589-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c3f65ab-245e-480f-be16-5f1c4c67a27c", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56fc6589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"calico-apiserver-6b56fc6589-wgdrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0590857b47d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:07:59.282379 containerd[2017]: 2025-09-12 17:07:59.186 [INFO][4962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.67/32] ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.282379 containerd[2017]: 2025-09-12 17:07:59.186 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0590857b47d ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.282379 containerd[2017]: 2025-09-12 17:07:59.232 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.282379 containerd[2017]: 2025-09-12 17:07:59.237 [INFO][4962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0", GenerateName:"calico-apiserver-6b56fc6589-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c3f65ab-245e-480f-be16-5f1c4c67a27c", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56fc6589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac", Pod:"calico-apiserver-6b56fc6589-wgdrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0590857b47d", MAC:"12:70:37:20:3b:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:07:59.282379 containerd[2017]: 2025-09-12 17:07:59.264 [INFO][4962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-wgdrv" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--wgdrv-eth0"
Sep 12 17:07:59.351079 containerd[2017]: time="2025-09-12T17:07:59.350997709Z" level=info msg="connecting to shim c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac" address="unix:///run/containerd/s/b64dd1e80f9eabdb0302fb69f281a0f5b847193e64ed238a1c75a22b102f732f" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:07:59.498837 containerd[2017]: time="2025-09-12T17:07:59.498627062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d2m7b,Uid:f964977e-1efd-4336-bed4-9aaaf25a614d,Namespace:kube-system,Attempt:0,} returns sandbox id \"540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e\""
Sep 12 17:07:59.516661 containerd[2017]: time="2025-09-12T17:07:59.516593654Z" level=info msg="CreateContainer within sandbox \"540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:07:59.574902 containerd[2017]: time="2025-09-12T17:07:59.574824242Z" level=info msg="Container a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:07:59.605906 containerd[2017]: time="2025-09-12T17:07:59.605837126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db7dcb6-hwncn,Uid:c20d52c1-4f15-4e38-97b6-f025b6331f9b,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:07:59.609729 containerd[2017]: time="2025-09-12T17:07:59.609639278Z" level=info msg="CreateContainer within sandbox \"540660450e0914d225a1dc2eaeb596ffd8189b964c458f7b356c9be034718a7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id
\"a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34\"" Sep 12 17:07:59.618823 containerd[2017]: time="2025-09-12T17:07:59.618253911Z" level=info msg="StartContainer for \"a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34\"" Sep 12 17:07:59.625860 containerd[2017]: time="2025-09-12T17:07:59.625735299Z" level=info msg="connecting to shim a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34" address="unix:///run/containerd/s/31540582fc1d1c6ac1d7951e2244cef39846304c3cff68d09e02462ff6b5479a" protocol=ttrpc version=3 Sep 12 17:07:59.664101 systemd[1]: Started cri-containerd-c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac.scope - libcontainer container c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac. Sep 12 17:07:59.739480 systemd[1]: Started sshd@9-172.31.16.146:22-139.178.68.195:55998.service - OpenSSH per-connection server daemon (139.178.68.195:55998). Sep 12 17:07:59.803441 systemd-networkd[1897]: vxlan.calico: Gained IPv6LL Sep 12 17:07:59.885487 systemd[1]: Started cri-containerd-a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34.scope - libcontainer container a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34. 
Sep 12 17:08:00.055268 sshd[5122]: Accepted publickey for core from 139.178.68.195 port 55998 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:08:00.062791 containerd[2017]: time="2025-09-12T17:08:00.060651733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:00.066144 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:00.069435 containerd[2017]: time="2025-09-12T17:08:00.068598169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:08:00.072011 containerd[2017]: time="2025-09-12T17:08:00.071957245Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:00.089595 systemd-logind[1992]: New session 10 of user core. Sep 12 17:08:00.095884 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 17:08:00.114495 containerd[2017]: time="2025-09-12T17:08:00.107750773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:00.114495 containerd[2017]: time="2025-09-12T17:08:00.113362033Z" level=info msg="StartContainer for \"a319e79881a5cac99f17f76c0ee0d36129fa093c45ee129e526d93cd00f51e34\" returns successfully" Sep 12 17:08:00.121794 containerd[2017]: time="2025-09-12T17:08:00.121269757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.91821483s" Sep 12 17:08:00.121794 containerd[2017]: time="2025-09-12T17:08:00.121330645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:08:00.136594 containerd[2017]: time="2025-09-12T17:08:00.136431709Z" level=info msg="CreateContainer within sandbox \"de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:08:00.161447 containerd[2017]: time="2025-09-12T17:08:00.161382085Z" level=info msg="Container d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:08:00.210835 containerd[2017]: time="2025-09-12T17:08:00.210163741Z" level=info msg="CreateContainer within sandbox \"de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa\"" Sep 12 
17:08:00.216116 containerd[2017]: time="2025-09-12T17:08:00.216069770Z" level=info msg="StartContainer for \"d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa\"" Sep 12 17:08:00.224969 containerd[2017]: time="2025-09-12T17:08:00.224910770Z" level=info msg="connecting to shim d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa" address="unix:///run/containerd/s/70acc44b629774315148fae603325acd6b64723460e7940a04fdd3604de479a0" protocol=ttrpc version=3 Sep 12 17:08:00.253221 systemd-networkd[1897]: cali34a0a1721ba: Gained IPv6LL Sep 12 17:08:00.349095 systemd[1]: Started cri-containerd-d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa.scope - libcontainer container d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa. Sep 12 17:08:00.443385 systemd-networkd[1897]: cali0590857b47d: Gained IPv6LL Sep 12 17:08:00.462823 containerd[2017]: time="2025-09-12T17:08:00.462575871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-wgdrv,Uid:3c3f65ab-245e-480f-be16-5f1c4c67a27c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac\"" Sep 12 17:08:00.472901 containerd[2017]: time="2025-09-12T17:08:00.472758819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:08:00.612280 systemd-networkd[1897]: cali5baeca58989: Link UP Sep 12 17:08:00.617107 systemd-networkd[1897]: cali5baeca58989: Gained carrier Sep 12 17:08:00.646979 sshd[5169]: Connection closed by 139.178.68.195 port 55998 Sep 12 17:08:00.648571 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:00.664222 systemd[1]: sshd@9-172.31.16.146:22-139.178.68.195:55998.service: Deactivated successfully. 
Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.133 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0 calico-apiserver-6c7db7dcb6- calico-apiserver c20d52c1-4f15-4e38-97b6-f025b6331f9b 899 0 2025-09-12 17:07:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7db7dcb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-146 calico-apiserver-6c7db7dcb6-hwncn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5baeca58989 [] [] }} ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.134 [INFO][5104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.367 [INFO][5172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" HandleID="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Workload="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.368 [INFO][5172] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" 
HandleID="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Workload="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-146", "pod":"calico-apiserver-6c7db7dcb6-hwncn", "timestamp":"2025-09-12 17:08:00.366463526 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.368 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.369 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.369 [INFO][5172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146' Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.433 [INFO][5172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.461 [INFO][5172] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.488 [INFO][5172] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.494 [INFO][5172] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.512 [INFO][5172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 
host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.513 [INFO][5172] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.523 [INFO][5172] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.546 [INFO][5172] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.577 [INFO][5172] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.68/26] block=192.168.18.64/26 handle="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.577 [INFO][5172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.68/26] handle="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" host="ip-172-31-16-146" Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.578 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:08:00.670348 containerd[2017]: 2025-09-12 17:08:00.579 [INFO][5172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.68/26] IPv6=[] ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" HandleID="k8s-pod-network.cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Workload="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.681112 containerd[2017]: 2025-09-12 17:08:00.585 [INFO][5104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0", GenerateName:"calico-apiserver-6c7db7dcb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c20d52c1-4f15-4e38-97b6-f025b6331f9b", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7db7dcb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"calico-apiserver-6c7db7dcb6-hwncn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5baeca58989", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:00.681112 containerd[2017]: 2025-09-12 17:08:00.586 [INFO][5104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.68/32] ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.681112 containerd[2017]: 2025-09-12 17:08:00.586 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5baeca58989 ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.681112 containerd[2017]: 2025-09-12 17:08:00.622 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.681112 containerd[2017]: 2025-09-12 17:08:00.624 [INFO][5104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0", GenerateName:"calico-apiserver-6c7db7dcb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c20d52c1-4f15-4e38-97b6-f025b6331f9b", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7db7dcb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc", Pod:"calico-apiserver-6c7db7dcb6-hwncn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5baeca58989", MAC:"82:d6:f5:0e:cd:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:00.681112 containerd[2017]: 2025-09-12 17:08:00.655 [INFO][5104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db7dcb6-hwncn" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6c7db7dcb6--hwncn-eth0" Sep 12 17:08:00.677285 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:08:00.688669 systemd-logind[1992]: Session 10 logged out. 
Waiting for processes to exit. Sep 12 17:08:00.694375 systemd-logind[1992]: Removed session 10. Sep 12 17:08:00.734563 containerd[2017]: time="2025-09-12T17:08:00.734026120Z" level=info msg="connecting to shim cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc" address="unix:///run/containerd/s/62762d06af7e2fcb08c910057fb2bd09874c14b7e9ca2b9d9889ef5729714ed5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:08:00.873156 systemd[1]: Started cri-containerd-cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc.scope - libcontainer container cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc. Sep 12 17:08:00.902064 containerd[2017]: time="2025-09-12T17:08:00.901979717Z" level=info msg="StartContainer for \"d3d5eb7b24a5b5729b0a0bb3fd1a9f1acc3eaff06ba5e48ce3fe7d38ebf28ffa\" returns successfully" Sep 12 17:08:01.037216 containerd[2017]: time="2025-09-12T17:08:01.037156322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db7dcb6-hwncn,Uid:c20d52c1-4f15-4e38-97b6-f025b6331f9b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc\"" Sep 12 17:08:01.589209 containerd[2017]: time="2025-09-12T17:08:01.589038664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796cbb8599-ddptz,Uid:d403a1e9-6639-4e3b-948a-4d6dfadcb895,Namespace:calico-system,Attempt:0,}" Sep 12 17:08:01.788088 systemd-networkd[1897]: cali5baeca58989: Gained IPv6LL Sep 12 17:08:01.867400 systemd-networkd[1897]: calib3e137ad5bd: Link UP Sep 12 17:08:01.869494 systemd-networkd[1897]: calib3e137ad5bd: Gained carrier Sep 12 17:08:01.899982 kubelet[3645]: I0912 17:08:01.899674 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d2m7b" podStartSLOduration=51.899644362 podStartE2EDuration="51.899644362s" podCreationTimestamp="2025-09-12 17:07:10 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:01.085935194 +0000 UTC m=+55.804742618" watchObservedRunningTime="2025-09-12 17:08:01.899644362 +0000 UTC m=+56.618451738" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.669 [INFO][5299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0 calico-kube-controllers-796cbb8599- calico-system d403a1e9-6639-4e3b-948a-4d6dfadcb895 898 0 2025-09-12 17:07:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:796cbb8599 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-146 calico-kube-controllers-796cbb8599-ddptz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib3e137ad5bd [] [] }} ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.669 [INFO][5299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.736 [INFO][5312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" 
HandleID="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Workload="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.737 [INFO][5312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" HandleID="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Workload="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c17d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-146", "pod":"calico-kube-controllers-796cbb8599-ddptz", "timestamp":"2025-09-12 17:08:01.736444061 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.737 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.738 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.738 [INFO][5312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146' Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.762 [INFO][5312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.774 [INFO][5312] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.784 [INFO][5312] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.792 [INFO][5312] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.798 [INFO][5312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.798 [INFO][5312] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.808 [INFO][5312] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600 Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.826 [INFO][5312] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.845 [INFO][5312] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.69/26] block=192.168.18.64/26 
handle="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.846 [INFO][5312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.69/26] handle="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" host="ip-172-31-16-146" Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.846 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:08:01.907805 containerd[2017]: 2025-09-12 17:08:01.846 [INFO][5312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.69/26] IPv6=[] ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" HandleID="k8s-pod-network.a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Workload="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.912082 containerd[2017]: 2025-09-12 17:08:01.852 [INFO][5299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0", GenerateName:"calico-kube-controllers-796cbb8599-", Namespace:"calico-system", SelfLink:"", UID:"d403a1e9-6639-4e3b-948a-4d6dfadcb895", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796cbb8599", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"calico-kube-controllers-796cbb8599-ddptz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3e137ad5bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:01.912082 containerd[2017]: 2025-09-12 17:08:01.853 [INFO][5299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.69/32] ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.912082 containerd[2017]: 2025-09-12 17:08:01.853 [INFO][5299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3e137ad5bd ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.912082 containerd[2017]: 2025-09-12 17:08:01.871 [INFO][5299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" 
WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.912082 containerd[2017]: 2025-09-12 17:08:01.872 [INFO][5299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0", GenerateName:"calico-kube-controllers-796cbb8599-", Namespace:"calico-system", SelfLink:"", UID:"d403a1e9-6639-4e3b-948a-4d6dfadcb895", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796cbb8599", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600", Pod:"calico-kube-controllers-796cbb8599-ddptz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3e137ad5bd", 
MAC:"fa:a5:b4:68:84:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:01.912082 containerd[2017]: 2025-09-12 17:08:01.897 [INFO][5299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" Namespace="calico-system" Pod="calico-kube-controllers-796cbb8599-ddptz" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--kube--controllers--796cbb8599--ddptz-eth0" Sep 12 17:08:01.985117 containerd[2017]: time="2025-09-12T17:08:01.984510846Z" level=info msg="connecting to shim a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600" address="unix:///run/containerd/s/198b6291cc90a411a15d061099e3c79300d8aa1ead648a3c44d1b4d603f9151d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:08:02.072683 systemd[1]: Started cri-containerd-a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600.scope - libcontainer container a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600. 
Sep 12 17:08:02.281523 containerd[2017]: time="2025-09-12T17:08:02.281456524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796cbb8599-ddptz,Uid:d403a1e9-6639-4e3b-948a-4d6dfadcb895,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600\"" Sep 12 17:08:02.590028 containerd[2017]: time="2025-09-12T17:08:02.589548341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-9xsvp,Uid:d9593beb-c142-41ba-95c3-de6f41b7c5f1,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:08:02.946022 systemd-networkd[1897]: cali38112029d19: Link UP Sep 12 17:08:02.950418 systemd-networkd[1897]: cali38112029d19: Gained carrier Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.707 [INFO][5379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0 calico-apiserver-6b56fc6589- calico-apiserver d9593beb-c142-41ba-95c3-de6f41b7c5f1 897 0 2025-09-12 17:07:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b56fc6589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-146 calico-apiserver-6b56fc6589-9xsvp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38112029d19 [] [] }} ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.708 [INFO][5379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" 
Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.801 [INFO][5391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.801 [INFO][5391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-146", "pod":"calico-apiserver-6b56fc6589-9xsvp", "timestamp":"2025-09-12 17:08:02.801059718 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.802 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.802 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.802 [INFO][5391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146' Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.827 [INFO][5391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.841 [INFO][5391] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.859 [INFO][5391] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.867 [INFO][5391] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.877 [INFO][5391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.877 [INFO][5391] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.882 [INFO][5391] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.897 [INFO][5391] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.923 [INFO][5391] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.70/26] block=192.168.18.64/26 
handle="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.924 [INFO][5391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.70/26] handle="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" host="ip-172-31-16-146" Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.925 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:08:02.989138 containerd[2017]: 2025-09-12 17:08:02.926 [INFO][5391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.70/26] IPv6=[] ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:02.990590 containerd[2017]: 2025-09-12 17:08:02.933 [INFO][5379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0", GenerateName:"calico-apiserver-6b56fc6589-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9593beb-c142-41ba-95c3-de6f41b7c5f1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56fc6589", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"calico-apiserver-6b56fc6589-9xsvp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38112029d19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:02.990590 containerd[2017]: 2025-09-12 17:08:02.933 [INFO][5379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.70/32] ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:02.990590 containerd[2017]: 2025-09-12 17:08:02.933 [INFO][5379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38112029d19 ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:02.990590 containerd[2017]: 2025-09-12 17:08:02.949 [INFO][5379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 
17:08:02.990590 containerd[2017]: 2025-09-12 17:08:02.950 [INFO][5379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0", GenerateName:"calico-apiserver-6b56fc6589-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9593beb-c142-41ba-95c3-de6f41b7c5f1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b56fc6589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb", Pod:"calico-apiserver-6b56fc6589-9xsvp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38112029d19", MAC:"da:af:37:80:20:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 
17:08:02.990590 containerd[2017]: 2025-09-12 17:08:02.980 [INFO][5379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b56fc6589-9xsvp" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:03.068346 containerd[2017]: time="2025-09-12T17:08:03.068171224Z" level=info msg="connecting to shim 3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" address="unix:///run/containerd/s/43ce46bc9aa9020cfc14d7be24bff0a036a95f62740b06ec19278a97f4b2c4e9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:08:03.131467 systemd-networkd[1897]: calib3e137ad5bd: Gained IPv6LL Sep 12 17:08:03.173884 systemd[1]: Started cri-containerd-3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb.scope - libcontainer container 3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb. Sep 12 17:08:03.307264 containerd[2017]: time="2025-09-12T17:08:03.307205597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b56fc6589-9xsvp,Uid:d9593beb-c142-41ba-95c3-de6f41b7c5f1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\"" Sep 12 17:08:03.597503 containerd[2017]: time="2025-09-12T17:08:03.597344766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2lt9,Uid:7dba5676-b81c-4c99-a964-12a04841c7f1,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:03.598204 containerd[2017]: time="2025-09-12T17:08:03.598136610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-drgp6,Uid:3451646a-5365-4f0c-8470-08bd6eac7042,Namespace:calico-system,Attempt:0,}" Sep 12 17:08:03.601069 containerd[2017]: time="2025-09-12T17:08:03.598703574Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-kdphf,Uid:8ac2608c-1fe0-4dc8-a918-dfae01ff6391,Namespace:calico-system,Attempt:0,}" Sep 12 17:08:04.156033 systemd-networkd[1897]: cali38112029d19: Gained IPv6LL Sep 12 17:08:04.333115 systemd-networkd[1897]: cali949a95bce94: Link UP Sep 12 17:08:04.337954 systemd-networkd[1897]: cali949a95bce94: Gained carrier Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:03.832 [INFO][5456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0 coredns-674b8bbfcf- kube-system 7dba5676-b81c-4c99-a964-12a04841c7f1 896 0 2025-09-12 17:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-146 coredns-674b8bbfcf-b2lt9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali949a95bce94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:03.832 [INFO][5456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.098 [INFO][5494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" HandleID="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" 
Workload="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.100 [INFO][5494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" HandleID="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Workload="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-146", "pod":"coredns-674b8bbfcf-b2lt9", "timestamp":"2025-09-12 17:08:04.098947229 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.100 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.100 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.100 [INFO][5494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146' Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.146 [INFO][5494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.166 [INFO][5494] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.187 [INFO][5494] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.196 [INFO][5494] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.208 [INFO][5494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.208 [INFO][5494] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.216 [INFO][5494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8 Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.234 [INFO][5494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.283 [INFO][5494] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.71/26] block=192.168.18.64/26 
handle="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.285 [INFO][5494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.71/26] handle="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" host="ip-172-31-16-146" Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.286 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:08:04.433449 containerd[2017]: 2025-09-12 17:08:04.288 [INFO][5494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.71/26] IPv6=[] ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" HandleID="k8s-pod-network.00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Workload="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.437581 containerd[2017]: 2025-09-12 17:08:04.308 [INFO][5456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7dba5676-b81c-4c99-a964-12a04841c7f1", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"coredns-674b8bbfcf-b2lt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali949a95bce94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:04.437581 containerd[2017]: 2025-09-12 17:08:04.309 [INFO][5456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.71/32] ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.437581 containerd[2017]: 2025-09-12 17:08:04.310 [INFO][5456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali949a95bce94 ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.437581 containerd[2017]: 2025-09-12 17:08:04.340 [INFO][5456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.437581 containerd[2017]: 2025-09-12 17:08:04.344 [INFO][5456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7dba5676-b81c-4c99-a964-12a04841c7f1", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8", Pod:"coredns-674b8bbfcf-b2lt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali949a95bce94", MAC:"ae:ca:48:c3:81:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:04.437581 containerd[2017]: 2025-09-12 17:08:04.392 [INFO][5456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2lt9" WorkloadEndpoint="ip--172--31--16--146-k8s-coredns--674b8bbfcf--b2lt9-eth0" Sep 12 17:08:04.521054 containerd[2017]: time="2025-09-12T17:08:04.520966555Z" level=info msg="connecting to shim 00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8" address="unix:///run/containerd/s/03c082d6c0acc66ca915faeb2d72d74e8ce85681acacf29f71ef8901dffe2eea" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:08:04.630969 systemd-networkd[1897]: calieb097369b73: Link UP Sep 12 17:08:04.640846 systemd-networkd[1897]: calieb097369b73: Gained carrier Sep 12 17:08:04.719073 systemd[1]: Started cri-containerd-00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8.scope - libcontainer container 00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8. 
Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:03.914 [INFO][5462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0 csi-node-driver- calico-system 3451646a-5365-4f0c-8470-08bd6eac7042 785 0 2025-09-12 17:07:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-146 csi-node-driver-drgp6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieb097369b73 [] [] }} ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:03.920 [INFO][5462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.200 [INFO][5501] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" HandleID="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Workload="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.201 [INFO][5501] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" 
HandleID="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Workload="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003abb50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-146", "pod":"csi-node-driver-drgp6", "timestamp":"2025-09-12 17:08:04.200683049 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.201 [INFO][5501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.287 [INFO][5501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.287 [INFO][5501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146' Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.370 [INFO][5501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.396 [INFO][5501] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.439 [INFO][5501] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.449 [INFO][5501] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.465 [INFO][5501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 
17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.465 [INFO][5501] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.476 [INFO][5501] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.513 [INFO][5501] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.550 [INFO][5501] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.72/26] block=192.168.18.64/26 handle="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.552 [INFO][5501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.72/26] handle="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" host="ip-172-31-16-146" Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.552 [INFO][5501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:08:04.745377 containerd[2017]: 2025-09-12 17:08:04.554 [INFO][5501] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.72/26] IPv6=[] ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" HandleID="k8s-pod-network.34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Workload="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.748358 containerd[2017]: 2025-09-12 17:08:04.600 [INFO][5462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3451646a-5365-4f0c-8470-08bd6eac7042", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"csi-node-driver-drgp6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb097369b73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:04.748358 containerd[2017]: 2025-09-12 17:08:04.600 [INFO][5462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.72/32] ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.748358 containerd[2017]: 2025-09-12 17:08:04.600 [INFO][5462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb097369b73 ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.748358 containerd[2017]: 2025-09-12 17:08:04.666 [INFO][5462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.748358 containerd[2017]: 2025-09-12 17:08:04.669 [INFO][5462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"3451646a-5365-4f0c-8470-08bd6eac7042", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c", Pod:"csi-node-driver-drgp6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb097369b73", MAC:"2a:3f:63:ab:c1:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:04.748358 containerd[2017]: 2025-09-12 17:08:04.728 [INFO][5462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" Namespace="calico-system" Pod="csi-node-driver-drgp6" WorkloadEndpoint="ip--172--31--16--146-k8s-csi--node--driver--drgp6-eth0" Sep 12 17:08:04.881077 containerd[2017]: time="2025-09-12T17:08:04.881018169Z" level=info msg="connecting to shim 34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c" address="unix:///run/containerd/s/0e05472364e841a2dc1dfd67813ced545cbb1cf25a6ba02e823d40038fff48f9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:08:04.931862 
systemd-networkd[1897]: caliabaf8b395ba: Link UP Sep 12 17:08:04.938319 systemd-networkd[1897]: caliabaf8b395ba: Gained carrier Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:03.936 [INFO][5473] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0 goldmane-54d579b49d- calico-system 8ac2608c-1fe0-4dc8-a918-dfae01ff6391 902 0 2025-09-12 17:07:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-146 goldmane-54d579b49d-kdphf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliabaf8b395ba [] [] }} ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:03.937 [INFO][5473] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.213 [INFO][5503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" HandleID="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Workload="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.216 [INFO][5503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" 
HandleID="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Workload="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000306d10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-146", "pod":"goldmane-54d579b49d-kdphf", "timestamp":"2025-09-12 17:08:04.213086813 +0000 UTC"}, Hostname:"ip-172-31-16-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.216 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.554 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.556 [INFO][5503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-146' Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.593 [INFO][5503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.667 [INFO][5503] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.743 [INFO][5503] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.758 [INFO][5503] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.789 [INFO][5503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ip-172-31-16-146" 
Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.790 [INFO][5503] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.801 [INFO][5503] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760 Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.827 [INFO][5503] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.861 [INFO][5503] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.18.73/26] block=192.168.18.64/26 handle="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.862 [INFO][5503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.73/26] handle="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" host="ip-172-31-16-146" Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.863 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:08:05.000735 containerd[2017]: 2025-09-12 17:08:04.863 [INFO][5503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.73/26] IPv6=[] ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" HandleID="k8s-pod-network.7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Workload="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.004970 containerd[2017]: 2025-09-12 17:08:04.889 [INFO][5473] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8ac2608c-1fe0-4dc8-a918-dfae01ff6391", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"", Pod:"goldmane-54d579b49d-kdphf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"caliabaf8b395ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:05.004970 containerd[2017]: 2025-09-12 17:08:04.891 [INFO][5473] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.73/32] ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.004970 containerd[2017]: 2025-09-12 17:08:04.893 [INFO][5473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabaf8b395ba ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.004970 containerd[2017]: 2025-09-12 17:08:04.948 [INFO][5473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.004970 containerd[2017]: 2025-09-12 17:08:04.963 [INFO][5473] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8ac2608c-1fe0-4dc8-a918-dfae01ff6391", ResourceVersion:"902", 
Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-146", ContainerID:"7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760", Pod:"goldmane-54d579b49d-kdphf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliabaf8b395ba", MAC:"b2:b3:43:06:27:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:08:05.004970 containerd[2017]: 2025-09-12 17:08:04.990 [INFO][5473] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" Namespace="calico-system" Pod="goldmane-54d579b49d-kdphf" WorkloadEndpoint="ip--172--31--16--146-k8s-goldmane--54d579b49d--kdphf-eth0" Sep 12 17:08:05.034692 systemd[1]: Started cri-containerd-34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c.scope - libcontainer container 34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c. 
Sep 12 17:08:05.073494 containerd[2017]: time="2025-09-12T17:08:05.073373130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2lt9,Uid:7dba5676-b81c-4c99-a964-12a04841c7f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8\"" Sep 12 17:08:05.111010 containerd[2017]: time="2025-09-12T17:08:05.110926314Z" level=info msg="CreateContainer within sandbox \"00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:08:05.137738 containerd[2017]: time="2025-09-12T17:08:05.137664186Z" level=info msg="connecting to shim 7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760" address="unix:///run/containerd/s/84572901c3a18c97f5410b5e52ab8e17e4aa6ef132fa329285c933b0e2af8773" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:08:05.158596 containerd[2017]: time="2025-09-12T17:08:05.158536362Z" level=info msg="Container b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:08:05.190136 containerd[2017]: time="2025-09-12T17:08:05.189545910Z" level=info msg="CreateContainer within sandbox \"00b7194b023a3796ccbb2e2ceb56e7532687bce014aee003603dff06e952e7d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b\"" Sep 12 17:08:05.194987 containerd[2017]: time="2025-09-12T17:08:05.194730018Z" level=info msg="StartContainer for \"b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b\"" Sep 12 17:08:05.203794 containerd[2017]: time="2025-09-12T17:08:05.203563506Z" level=info msg="connecting to shim b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b" address="unix:///run/containerd/s/03c082d6c0acc66ca915faeb2d72d74e8ce85681acacf29f71ef8901dffe2eea" protocol=ttrpc version=3 Sep 12 17:08:05.316289 systemd[1]: Started 
cri-containerd-7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760.scope - libcontainer container 7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760. Sep 12 17:08:05.392140 systemd[1]: Started cri-containerd-b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b.scope - libcontainer container b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b. Sep 12 17:08:05.515644 containerd[2017]: time="2025-09-12T17:08:05.514960340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-drgp6,Uid:3451646a-5365-4f0c-8470-08bd6eac7042,Namespace:calico-system,Attempt:0,} returns sandbox id \"34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c\"" Sep 12 17:08:05.630787 containerd[2017]: time="2025-09-12T17:08:05.630597824Z" level=info msg="StartContainer for \"b113ea72c2fbea4c7571701c6af28d9f826cbe6354ca42e79c916277431ec60b\" returns successfully" Sep 12 17:08:05.698097 systemd[1]: Started sshd@10-172.31.16.146:22-139.178.68.195:37592.service - OpenSSH per-connection server daemon (139.178.68.195:37592). Sep 12 17:08:05.872691 containerd[2017]: time="2025-09-12T17:08:05.872111326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kdphf,Uid:8ac2608c-1fe0-4dc8-a918-dfae01ff6391,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760\"" Sep 12 17:08:05.960258 sshd[5722]: Accepted publickey for core from 139.178.68.195 port 37592 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:08:05.964257 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:05.981894 systemd-logind[1992]: New session 11 of user core. Sep 12 17:08:05.988102 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 17:08:05.993805 containerd[2017]: time="2025-09-12T17:08:05.993582754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:05.999589 containerd[2017]: time="2025-09-12T17:08:05.999513286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:08:06.001707 containerd[2017]: time="2025-09-12T17:08:06.001107750Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:06.010873 containerd[2017]: time="2025-09-12T17:08:06.010726266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:06.014024 containerd[2017]: time="2025-09-12T17:08:06.013743186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 5.539325775s" Sep 12 17:08:06.014574 containerd[2017]: time="2025-09-12T17:08:06.014536410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:08:06.018625 containerd[2017]: time="2025-09-12T17:08:06.018560718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:08:06.027733 containerd[2017]: time="2025-09-12T17:08:06.026166906Z" level=info msg="CreateContainer within sandbox 
\"c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:08:06.046511 containerd[2017]: time="2025-09-12T17:08:06.046385862Z" level=info msg="Container e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:08:06.062549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985579702.mount: Deactivated successfully. Sep 12 17:08:06.078114 containerd[2017]: time="2025-09-12T17:08:06.078047743Z" level=info msg="CreateContainer within sandbox \"c31e488ffc6dbc540d7a2e4907bd54159733458466c6c655314c95696fb166ac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc\"" Sep 12 17:08:06.080264 containerd[2017]: time="2025-09-12T17:08:06.079955347Z" level=info msg="StartContainer for \"e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc\"" Sep 12 17:08:06.084190 containerd[2017]: time="2025-09-12T17:08:06.084121639Z" level=info msg="connecting to shim e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc" address="unix:///run/containerd/s/b64dd1e80f9eabdb0302fb69f281a0f5b847193e64ed238a1c75a22b102f732f" protocol=ttrpc version=3 Sep 12 17:08:06.140336 systemd-networkd[1897]: cali949a95bce94: Gained IPv6LL Sep 12 17:08:06.211400 systemd[1]: Started cri-containerd-e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc.scope - libcontainer container e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc. 
Sep 12 17:08:06.249348 kubelet[3645]: I0912 17:08:06.248883 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b2lt9" podStartSLOduration=56.248857987 podStartE2EDuration="56.248857987s" podCreationTimestamp="2025-09-12 17:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:06.246013927 +0000 UTC m=+60.964821327" watchObservedRunningTime="2025-09-12 17:08:06.248857987 +0000 UTC m=+60.967665387" Sep 12 17:08:06.521837 sshd[5736]: Connection closed by 139.178.68.195 port 37592 Sep 12 17:08:06.520951 sshd-session[5722]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:06.524946 systemd-networkd[1897]: calieb097369b73: Gained IPv6LL Sep 12 17:08:06.541673 systemd[1]: sshd@10-172.31.16.146:22-139.178.68.195:37592.service: Deactivated successfully. Sep 12 17:08:06.553352 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:08:06.561849 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:08:06.569434 systemd-logind[1992]: Removed session 11. 
Sep 12 17:08:06.639453 containerd[2017]: time="2025-09-12T17:08:06.639386073Z" level=info msg="StartContainer for \"e0a52cb162cc2d5478c006890062a869bb56cc69d77700f62aeaaf14a98fc5fc\" returns successfully" Sep 12 17:08:06.907232 systemd-networkd[1897]: caliabaf8b395ba: Gained IPv6LL Sep 12 17:08:07.253229 kubelet[3645]: I0912 17:08:07.251195 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b56fc6589-wgdrv" podStartSLOduration=35.704019485 podStartE2EDuration="41.25103168s" podCreationTimestamp="2025-09-12 17:07:26 +0000 UTC" firstStartedPulling="2025-09-12 17:08:00.471119463 +0000 UTC m=+55.189926827" lastFinishedPulling="2025-09-12 17:08:06.018131586 +0000 UTC m=+60.736939022" observedRunningTime="2025-09-12 17:08:07.246470396 +0000 UTC m=+61.965277796" watchObservedRunningTime="2025-09-12 17:08:07.25103168 +0000 UTC m=+61.969839056" Sep 12 17:08:08.220837 kubelet[3645]: I0912 17:08:08.220056 3645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:08:08.895980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980431378.mount: Deactivated successfully. 
Sep 12 17:08:08.975799 containerd[2017]: time="2025-09-12T17:08:08.975701065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:08.979216 containerd[2017]: time="2025-09-12T17:08:08.978735889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:08:08.981669 containerd[2017]: time="2025-09-12T17:08:08.981496501Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:08.989095 containerd[2017]: time="2025-09-12T17:08:08.989039821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:08.989813 containerd[2017]: time="2025-09-12T17:08:08.989633077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.971004547s" Sep 12 17:08:08.989813 containerd[2017]: time="2025-09-12T17:08:08.989694037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:08:08.992639 containerd[2017]: time="2025-09-12T17:08:08.992576989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:08:09.000789 containerd[2017]: time="2025-09-12T17:08:09.000647217Z" level=info msg="CreateContainer within sandbox 
\"de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:08:09.020846 containerd[2017]: time="2025-09-12T17:08:09.020480013Z" level=info msg="Container af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:08:09.046692 containerd[2017]: time="2025-09-12T17:08:09.046612197Z" level=info msg="CreateContainer within sandbox \"de5136307c4c14da71c0e52ee2597a8161238a2ed6813059d9902327e9a14a60\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397\"" Sep 12 17:08:09.049784 containerd[2017]: time="2025-09-12T17:08:09.049680693Z" level=info msg="StartContainer for \"af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397\"" Sep 12 17:08:09.053175 containerd[2017]: time="2025-09-12T17:08:09.053102829Z" level=info msg="connecting to shim af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397" address="unix:///run/containerd/s/70acc44b629774315148fae603325acd6b64723460e7940a04fdd3604de479a0" protocol=ttrpc version=3 Sep 12 17:08:09.125218 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.18.64:123 Sep 12 17:08:09.125732 ntpd[1987]: Listen normally on 8 cali20892b4fd3e [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:08:09.125873 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::64ab:63ff:fee3:882e%5]:123 Sep 12 17:08:09.125943 ntpd[1987]: Listen normally on 10 cali34a0a1721ba [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:08:09.126011 ntpd[1987]: Listen normally on 11 cali0590857b47d [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:08:09.126077 ntpd[1987]: Listen normally on 12 cali5baeca58989 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:08:09.126144 ntpd[1987]: Listen normally on 13 calib3e137ad5bd [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:08:09.126207 ntpd[1987]: Listen normally on 14 cali38112029d19 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:08:09.126592 ntpd[1987]: Listen normally on 15 cali949a95bce94 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:08:09.126827 ntpd[1987]: Listen normally on 16 calieb097369b73 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:08:09.127038 ntpd[1987]: Listen normally on 17 caliabaf8b395ba [fe80::ecee:eeff:feee:eeee%15]:123 Sep 12 17:08:09.159168 systemd[1]: Started cri-containerd-af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397.scope - libcontainer container
af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397. Sep 12 17:08:09.289325 containerd[2017]: time="2025-09-12T17:08:09.289251347Z" level=info msg="StartContainer for \"af9f1fd83e17e03ded806aa7afdb115b966feca4c2397a13acfbc8a11db54397\" returns successfully" Sep 12 17:08:09.404898 containerd[2017]: time="2025-09-12T17:08:09.404174783Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:09.406502 containerd[2017]: time="2025-09-12T17:08:09.406424351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:08:09.416926 containerd[2017]: time="2025-09-12T17:08:09.416687807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 424.046474ms" Sep 12 17:08:09.417457 containerd[2017]: time="2025-09-12T17:08:09.417288311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:08:09.422469 containerd[2017]: time="2025-09-12T17:08:09.421902647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:08:09.431279 containerd[2017]: time="2025-09-12T17:08:09.431172143Z" level=info msg="CreateContainer within sandbox \"cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:08:09.482137 containerd[2017]: time="2025-09-12T17:08:09.482069604Z" level=info msg="Container 8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192: CDI devices from 
CRI Config.CDIDevices: []" Sep 12 17:08:09.503196 containerd[2017]: time="2025-09-12T17:08:09.503026332Z" level=info msg="CreateContainer within sandbox \"cb3c8ccb42f0831d0d28b41245a0520eebef8aef847cb75be4477e9909c7b5dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192\"" Sep 12 17:08:09.507799 containerd[2017]: time="2025-09-12T17:08:09.505171764Z" level=info msg="StartContainer for \"8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192\"" Sep 12 17:08:09.511737 containerd[2017]: time="2025-09-12T17:08:09.510272400Z" level=info msg="connecting to shim 8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192" address="unix:///run/containerd/s/62762d06af7e2fcb08c910057fb2bd09874c14b7e9ca2b9d9889ef5729714ed5" protocol=ttrpc version=3 Sep 12 17:08:09.553090 systemd[1]: Started cri-containerd-8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192.scope - libcontainer container 8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192. Sep 12 17:08:09.714552 containerd[2017]: time="2025-09-12T17:08:09.714017557Z" level=info msg="StartContainer for \"8115d47813ec2119a704f9e4699cd66a2c2417d262a6aa4f8e122e0c441c6192\" returns successfully" Sep 12 17:08:09.896752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524692992.mount: Deactivated successfully. 
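The systemd mount units cleaned up above, such as `var-lib-containerd-tmpmounts-containerd\x2dmount524692992.mount`, are named via systemd's path escaping: the leading `/` is dropped, remaining `/` separators become `-`, and other bytes (including literal `-`) are hex-escaped as `\xXX`. A minimal Python sketch of that mapping (the helper name is ours and the rules are approximated from `systemd-escape --path` behavior, not a systemd API):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd's path escaping (cf. `systemd-escape --path`):
    the leading '/' is dropped, '/' separators become '-', and bytes
    outside [A-Za-z0-9_.] are hex-escaped as \\xXX so a literal '-' in a
    path stays distinguishable from a separator."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# The tmpmount unit deactivated above (append ".mount" for the unit name):
# systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount524692992")
#   -> "var-lib-containerd-tmpmounts-containerd\x2dmount524692992"
```

The same mapping reproduces the kubelet volume units later in the log, e.g. `kubernetes.io~secret` becoming `kubernetes.io\x7esecret`.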
Sep 12 17:08:10.290040 kubelet[3645]: I0912 17:08:10.289912 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b8b8fc866-jgbzs" podStartSLOduration=2.49933643 podStartE2EDuration="14.288946116s" podCreationTimestamp="2025-09-12 17:07:56 +0000 UTC" firstStartedPulling="2025-09-12 17:07:57.202330199 +0000 UTC m=+51.921137575" lastFinishedPulling="2025-09-12 17:08:08.991939885 +0000 UTC m=+63.710747261" observedRunningTime="2025-09-12 17:08:10.28745058 +0000 UTC m=+65.006257968" watchObservedRunningTime="2025-09-12 17:08:10.288946116 +0000 UTC m=+65.007753492" Sep 12 17:08:11.276832 kubelet[3645]: I0912 17:08:11.276566 3645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:08:11.559232 systemd[1]: Started sshd@11-172.31.16.146:22-139.178.68.195:53318.service - OpenSSH per-connection server daemon (139.178.68.195:53318). Sep 12 17:08:11.773625 sshd[5875]: Accepted publickey for core from 139.178.68.195 port 53318 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:08:11.777012 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:11.796369 systemd-logind[1992]: New session 12 of user core. Sep 12 17:08:11.803005 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:08:12.176331 sshd[5879]: Connection closed by 139.178.68.195 port 53318 Sep 12 17:08:12.176874 sshd-session[5875]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:12.186563 systemd[1]: sshd@11-172.31.16.146:22-139.178.68.195:53318.service: Deactivated successfully. Sep 12 17:08:12.187097 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:08:12.198197 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:08:12.233203 systemd-logind[1992]: Removed session 12. 
Sep 12 17:08:12.233888 systemd[1]: Started sshd@12-172.31.16.146:22-139.178.68.195:53328.service - OpenSSH per-connection server daemon (139.178.68.195:53328). Sep 12 17:08:12.457943 sshd[5895]: Accepted publickey for core from 139.178.68.195 port 53328 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:08:12.462162 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:12.471868 systemd-logind[1992]: New session 13 of user core. Sep 12 17:08:12.484475 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:08:12.998881 sshd[5898]: Connection closed by 139.178.68.195 port 53328 Sep 12 17:08:12.999926 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:13.014230 systemd[1]: sshd@12-172.31.16.146:22-139.178.68.195:53328.service: Deactivated successfully. Sep 12 17:08:13.016795 systemd-logind[1992]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:08:13.022427 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:08:13.050268 systemd[1]: Started sshd@13-172.31.16.146:22-139.178.68.195:53334.service - OpenSSH per-connection server daemon (139.178.68.195:53334). Sep 12 17:08:13.062796 systemd-logind[1992]: Removed session 13. Sep 12 17:08:13.268936 sshd[5908]: Accepted publickey for core from 139.178.68.195 port 53334 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY Sep 12 17:08:13.273300 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:13.287584 systemd-logind[1992]: New session 14 of user core. Sep 12 17:08:13.294253 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:08:13.474910 kubelet[3645]: I0912 17:08:13.474723 3645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:08:13.539666 kubelet[3645]: I0912 17:08:13.539495 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7db7dcb6-hwncn" podStartSLOduration=35.160378663 podStartE2EDuration="43.539473564s" podCreationTimestamp="2025-09-12 17:07:30 +0000 UTC" firstStartedPulling="2025-09-12 17:08:01.040894442 +0000 UTC m=+55.759701806" lastFinishedPulling="2025-09-12 17:08:09.419989343 +0000 UTC m=+64.138796707" observedRunningTime="2025-09-12 17:08:10.328751952 +0000 UTC m=+65.047559328" watchObservedRunningTime="2025-09-12 17:08:13.539473564 +0000 UTC m=+68.258280928" Sep 12 17:08:13.654299 sshd[5912]: Connection closed by 139.178.68.195 port 53334 Sep 12 17:08:13.655196 sshd-session[5908]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:13.663093 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:08:13.667625 systemd[1]: sshd@13-172.31.16.146:22-139.178.68.195:53334.service: Deactivated successfully. Sep 12 17:08:13.682867 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:08:13.688014 systemd-logind[1992]: Removed session 14. 
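The kubelet `pod_startup_latency_tracker` entries above are internally consistent: `podStartE2EDuration` is the observed running time minus `podCreationTimestamp`, and `podStartSLOduration` is that E2E time minus the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`). A sketch checking this against the logged values (the relationship is inferred from the numbers in these entries; the function name is ours, not a kubelet API):

```python
def slo_duration(first_started_pulling: float, last_finished_pulling: float,
                 e2e_duration: float) -> float:
    """Reproduce podStartSLOduration from the other fields of a
    pod_startup_latency_tracker line: end-to-end startup time minus the
    image-pull window, using the monotonic "m=+..." offsets as seconds."""
    return e2e_duration - (last_finished_pulling - first_started_pulling)

# calico-apiserver-6c7db7dcb6-hwncn from the entry above:
# E2E 43.539473564s minus the pull window m=+55.759701806 .. m=+64.138796707
slo = slo_duration(55.759701806, 64.138796707, 43.539473564)
# agrees with the logged podStartSLOduration=35.160378663 (to float precision)
```

The whisker pod entry earlier (E2E 14.288946116s, pulls from m=+51.921137575 to m=+63.710747261) checks out the same way, giving its logged 2.49933643s.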
Sep 12 17:08:15.982494 containerd[2017]: time="2025-09-12T17:08:15.982368680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:15.985904 containerd[2017]: time="2025-09-12T17:08:15.985326740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:08:15.990820 containerd[2017]: time="2025-09-12T17:08:15.988749212Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:15.993025 containerd[2017]: time="2025-09-12T17:08:15.992960072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:15.996247 containerd[2017]: time="2025-09-12T17:08:15.996169400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 6.572932017s" Sep 12 17:08:15.996471 containerd[2017]: time="2025-09-12T17:08:15.996437024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:08:16.000028 containerd[2017]: time="2025-09-12T17:08:15.999981200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:08:16.048112 containerd[2017]: time="2025-09-12T17:08:16.047662804Z" level=info msg="CreateContainer within sandbox 
\"a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:08:16.069107 containerd[2017]: time="2025-09-12T17:08:16.069033928Z" level=info msg="Container d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:08:16.085231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616557366.mount: Deactivated successfully. Sep 12 17:08:16.094672 containerd[2017]: time="2025-09-12T17:08:16.094610608Z" level=info msg="CreateContainer within sandbox \"a9eda854c1b513e8005caebc64f38834b2bba066748367d2a3eca19da7f4b600\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\"" Sep 12 17:08:16.098613 containerd[2017]: time="2025-09-12T17:08:16.098547448Z" level=info msg="StartContainer for \"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\"" Sep 12 17:08:16.102672 containerd[2017]: time="2025-09-12T17:08:16.102585256Z" level=info msg="connecting to shim d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059" address="unix:///run/containerd/s/198b6291cc90a411a15d061099e3c79300d8aa1ead648a3c44d1b4d603f9151d" protocol=ttrpc version=3 Sep 12 17:08:16.162476 systemd[1]: Started cri-containerd-d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059.scope - libcontainer container d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059. 
Sep 12 17:08:16.419358 containerd[2017]: time="2025-09-12T17:08:16.419256090Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:16.421798 containerd[2017]: time="2025-09-12T17:08:16.421169490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:08:16.433103 containerd[2017]: time="2025-09-12T17:08:16.432499338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 432.196886ms" Sep 12 17:08:16.433103 containerd[2017]: time="2025-09-12T17:08:16.432731658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:08:16.436806 containerd[2017]: time="2025-09-12T17:08:16.436704090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:08:16.444904 containerd[2017]: time="2025-09-12T17:08:16.444762438Z" level=info msg="CreateContainer within sandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:08:16.470073 containerd[2017]: time="2025-09-12T17:08:16.469935030Z" level=info msg="Container eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:08:16.474354 containerd[2017]: time="2025-09-12T17:08:16.474056346Z" level=info msg="StartContainer for \"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\" returns successfully" Sep 12 17:08:16.494922 containerd[2017]: 
time="2025-09-12T17:08:16.494762838Z" level=info msg="CreateContainer within sandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\"" Sep 12 17:08:16.497672 containerd[2017]: time="2025-09-12T17:08:16.497610090Z" level=info msg="StartContainer for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\"" Sep 12 17:08:16.504979 containerd[2017]: time="2025-09-12T17:08:16.504841746Z" level=info msg="connecting to shim eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e" address="unix:///run/containerd/s/43ce46bc9aa9020cfc14d7be24bff0a036a95f62740b06ec19278a97f4b2c4e9" protocol=ttrpc version=3 Sep 12 17:08:16.571103 systemd[1]: Started cri-containerd-eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e.scope - libcontainer container eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e. 
Sep 12 17:08:16.846750 containerd[2017]: time="2025-09-12T17:08:16.846587456Z" level=info msg="StartContainer for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" returns successfully" Sep 12 17:08:17.344621 containerd[2017]: time="2025-09-12T17:08:17.343259995Z" level=info msg="StopContainer for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" with timeout 30 (s)" Sep 12 17:08:17.346391 containerd[2017]: time="2025-09-12T17:08:17.346332451Z" level=info msg="Stop container \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" with signal terminated" Sep 12 17:08:17.365088 kubelet[3645]: I0912 17:08:17.364602 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-796cbb8599-ddptz" podStartSLOduration=26.651136147 podStartE2EDuration="40.364468543s" podCreationTimestamp="2025-09-12 17:07:37 +0000 UTC" firstStartedPulling="2025-09-12 17:08:02.286223284 +0000 UTC m=+57.005030648" lastFinishedPulling="2025-09-12 17:08:15.999555584 +0000 UTC m=+70.718363044" observedRunningTime="2025-09-12 17:08:17.354791779 +0000 UTC m=+72.073599167" watchObservedRunningTime="2025-09-12 17:08:17.364468543 +0000 UTC m=+72.083275907" Sep 12 17:08:17.465857 systemd[1]: cri-containerd-eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e.scope: Deactivated successfully. 
Sep 12 17:08:17.478154 containerd[2017]: time="2025-09-12T17:08:17.477972007Z" level=info msg="received exit event container_id:\"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" id:\"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" pid:5987 exit_status:1 exited_at:{seconds:1757696897 nanos:475254175}" Sep 12 17:08:17.478154 containerd[2017]: time="2025-09-12T17:08:17.478128859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" id:\"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" pid:5987 exit_status:1 exited_at:{seconds:1757696897 nanos:475254175}" Sep 12 17:08:17.519799 containerd[2017]: time="2025-09-12T17:08:17.518856319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\" id:\"5a4c6312ba8d0b4d88bc78781ef371b01ab2dadb21d521dffa2f525222d1ca84\" pid:6035 exited_at:{seconds:1757696897 nanos:517995103}" Sep 12 17:08:17.560403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e-rootfs.mount: Deactivated successfully. 
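The `exited_at:{seconds:1757696897 nanos:475254175}` field in the exit events above is a protobuf-style Unix timestamp; converting it back to UTC lines it up with the surrounding 17:08:17 journal entries. A small sketch of the conversion (the helper name is ours, not a containerd API):

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> str:
    """Render a containerd exited_at {seconds, nanos} pair as a UTC string
    comparable with the journal's RFC3339-style timestamps."""
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S") + ".%09dZ" % nanos

# The TaskExit event above:
print(exited_at_to_utc(1757696897, 475254175))  # 2025-09-12T17:08:17.475254175Z
```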
Sep 12 17:08:17.578622 kubelet[3645]: I0912 17:08:17.578476 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b56fc6589-9xsvp" podStartSLOduration=38.454712143 podStartE2EDuration="51.57845414s" podCreationTimestamp="2025-09-12 17:07:26 +0000 UTC" firstStartedPulling="2025-09-12 17:08:03.311603621 +0000 UTC m=+58.030410985" lastFinishedPulling="2025-09-12 17:08:16.43534557 +0000 UTC m=+71.154152982" observedRunningTime="2025-09-12 17:08:17.435056071 +0000 UTC m=+72.153863471" watchObservedRunningTime="2025-09-12 17:08:17.57845414 +0000 UTC m=+72.297261516" Sep 12 17:08:17.804890 containerd[2017]: time="2025-09-12T17:08:17.804805293Z" level=info msg="StopContainer for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" returns successfully" Sep 12 17:08:17.806104 containerd[2017]: time="2025-09-12T17:08:17.806032089Z" level=info msg="StopPodSandbox for \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\"" Sep 12 17:08:17.806267 containerd[2017]: time="2025-09-12T17:08:17.806175597Z" level=info msg="Container to stop \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:08:17.830359 systemd[1]: cri-containerd-3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb.scope: Deactivated successfully. Sep 12 17:08:17.839047 containerd[2017]: time="2025-09-12T17:08:17.838918965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" id:\"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" pid:5440 exit_status:137 exited_at:{seconds:1757696897 nanos:834340689}" Sep 12 17:08:17.934568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb-rootfs.mount: Deactivated successfully. 
Sep 12 17:08:17.937289 containerd[2017]: time="2025-09-12T17:08:17.936926218Z" level=info msg="shim disconnected" id=3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb namespace=k8s.io Sep 12 17:08:17.937289 containerd[2017]: time="2025-09-12T17:08:17.936985438Z" level=warning msg="cleaning up after shim disconnected" id=3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb namespace=k8s.io Sep 12 17:08:17.937289 containerd[2017]: time="2025-09-12T17:08:17.937041478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:08:18.027861 containerd[2017]: time="2025-09-12T17:08:18.027636930Z" level=info msg="received exit event sandbox_id:\"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" exit_status:137 exited_at:{seconds:1757696897 nanos:834340689}" Sep 12 17:08:18.037371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb-shm.mount: Deactivated successfully. Sep 12 17:08:18.133968 systemd-networkd[1897]: cali38112029d19: Link DOWN Sep 12 17:08:18.134737 systemd-networkd[1897]: cali38112029d19: Lost carrier Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.128 [INFO][6111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.130 [INFO][6111] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" iface="eth0" netns="/var/run/netns/cni-3a9dbded-0634-e26a-be99-0534f4a6872d" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.132 [INFO][6111] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" iface="eth0" netns="/var/run/netns/cni-3a9dbded-0634-e26a-be99-0534f4a6872d" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.143 [INFO][6111] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" after=13.084105ms iface="eth0" netns="/var/run/netns/cni-3a9dbded-0634-e26a-be99-0534f4a6872d" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.143 [INFO][6111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.143 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.209 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.210 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.210 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.290 [INFO][6118] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.290 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0" Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.294 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:08:18.301162 containerd[2017]: 2025-09-12 17:08:18.297 [INFO][6111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Sep 12 17:08:18.302871 containerd[2017]: time="2025-09-12T17:08:18.302423767Z" level=info msg="TearDown network for sandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" successfully" Sep 12 17:08:18.302871 containerd[2017]: time="2025-09-12T17:08:18.302474827Z" level=info msg="StopPodSandbox for \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" returns successfully" Sep 12 17:08:18.308151 systemd[1]: run-netns-cni\x2d3a9dbded\x2d0634\x2de26a\x2dbe99\x2d0534f4a6872d.mount: Deactivated successfully. 
Sep 12 17:08:18.361003 kubelet[3645]: I0912 17:08:18.360881 3645 scope.go:117] "RemoveContainer" containerID="eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e" Sep 12 17:08:18.373433 containerd[2017]: time="2025-09-12T17:08:18.373358012Z" level=info msg="RemoveContainer for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\"" Sep 12 17:08:18.387391 containerd[2017]: time="2025-09-12T17:08:18.387201416Z" level=info msg="RemoveContainer for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" returns successfully" Sep 12 17:08:18.389180 kubelet[3645]: I0912 17:08:18.389124 3645 scope.go:117] "RemoveContainer" containerID="eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e" Sep 12 17:08:18.391076 containerd[2017]: time="2025-09-12T17:08:18.390976544Z" level=error msg="ContainerStatus for \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\": not found" Sep 12 17:08:18.391394 kubelet[3645]: E0912 17:08:18.391341 3645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\": not found" containerID="eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e" Sep 12 17:08:18.391629 kubelet[3645]: I0912 17:08:18.391404 3645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e"} err="failed to get container status \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\": rpc error: code = NotFound desc = an error occurred when try to find container \"eea8c115eb692e10d2f104008a652d5a829f5662b62facc3b4439a813057349e\": not found" Sep 12 17:08:18.425118 
kubelet[3645]: I0912 17:08:18.425035 3645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbmb5\" (UniqueName: \"kubernetes.io/projected/d9593beb-c142-41ba-95c3-de6f41b7c5f1-kube-api-access-gbmb5\") pod \"d9593beb-c142-41ba-95c3-de6f41b7c5f1\" (UID: \"d9593beb-c142-41ba-95c3-de6f41b7c5f1\") "
Sep 12 17:08:18.425283 kubelet[3645]: I0912 17:08:18.425122 3645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9593beb-c142-41ba-95c3-de6f41b7c5f1-calico-apiserver-certs\") pod \"d9593beb-c142-41ba-95c3-de6f41b7c5f1\" (UID: \"d9593beb-c142-41ba-95c3-de6f41b7c5f1\") "
Sep 12 17:08:18.446574 systemd[1]: var-lib-kubelet-pods-d9593beb\x2dc142\x2d41ba\x2d95c3\x2dde6f41b7c5f1-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 12 17:08:18.451241 kubelet[3645]: I0912 17:08:18.451167 3645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9593beb-c142-41ba-95c3-de6f41b7c5f1-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d9593beb-c142-41ba-95c3-de6f41b7c5f1" (UID: "d9593beb-c142-41ba-95c3-de6f41b7c5f1"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 17:08:18.456723 kubelet[3645]: I0912 17:08:18.456575 3645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9593beb-c142-41ba-95c3-de6f41b7c5f1-kube-api-access-gbmb5" (OuterVolumeSpecName: "kube-api-access-gbmb5") pod "d9593beb-c142-41ba-95c3-de6f41b7c5f1" (UID: "d9593beb-c142-41ba-95c3-de6f41b7c5f1"). InnerVolumeSpecName "kube-api-access-gbmb5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:08:18.459373 systemd[1]: var-lib-kubelet-pods-d9593beb\x2dc142\x2d41ba\x2d95c3\x2dde6f41b7c5f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbmb5.mount: Deactivated successfully.
Sep 12 17:08:18.527321 kubelet[3645]: I0912 17:08:18.527268 3645 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9593beb-c142-41ba-95c3-de6f41b7c5f1-calico-apiserver-certs\") on node \"ip-172-31-16-146\" DevicePath \"\""
Sep 12 17:08:18.528293 kubelet[3645]: I0912 17:08:18.528109 3645 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gbmb5\" (UniqueName: \"kubernetes.io/projected/d9593beb-c142-41ba-95c3-de6f41b7c5f1-kube-api-access-gbmb5\") on node \"ip-172-31-16-146\" DevicePath \"\""
Sep 12 17:08:18.664941 containerd[2017]: time="2025-09-12T17:08:18.664682349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:18.670017 containerd[2017]: time="2025-09-12T17:08:18.669932721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 12 17:08:18.671260 containerd[2017]: time="2025-09-12T17:08:18.671185701Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:18.677336 containerd[2017]: time="2025-09-12T17:08:18.676256685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:18.679892 containerd[2017]: time="2025-09-12T17:08:18.679748649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 2.241826211s"
Sep 12 17:08:18.679892 containerd[2017]: time="2025-09-12T17:08:18.679878981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 12 17:08:18.684389 containerd[2017]: time="2025-09-12T17:08:18.684204393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 12 17:08:18.695898 containerd[2017]: time="2025-09-12T17:08:18.693729069Z" level=info msg="CreateContainer within sandbox \"34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:08:18.694908 systemd[1]: Removed slice kubepods-besteffort-podd9593beb_c142_41ba_95c3_de6f41b7c5f1.slice - libcontainer container kubepods-besteffort-podd9593beb_c142_41ba_95c3_de6f41b7c5f1.slice.
Sep 12 17:08:18.699673 systemd[1]: Started sshd@14-172.31.16.146:22-139.178.68.195:53338.service - OpenSSH per-connection server daemon (139.178.68.195:53338).
Sep 12 17:08:18.725897 containerd[2017]: time="2025-09-12T17:08:18.722910045Z" level=info msg="Container 416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:08:18.772733 containerd[2017]: time="2025-09-12T17:08:18.772428142Z" level=info msg="CreateContainer within sandbox \"34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4\""
Sep 12 17:08:18.776883 containerd[2017]: time="2025-09-12T17:08:18.774968170Z" level=info msg="StartContainer for \"416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4\""
Sep 12 17:08:18.782172 containerd[2017]: time="2025-09-12T17:08:18.781593826Z" level=info msg="connecting to shim 416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4" address="unix:///run/containerd/s/0e05472364e841a2dc1dfd67813ced545cbb1cf25a6ba02e823d40038fff48f9" protocol=ttrpc version=3
Sep 12 17:08:18.836558 systemd[1]: Started cri-containerd-416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4.scope - libcontainer container 416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4.
Sep 12 17:08:18.938967 containerd[2017]: time="2025-09-12T17:08:18.936750874Z" level=info msg="StartContainer for \"416fb74b7b1b6caba8c44669574591b97683f5007ed9d0e59e658aeb5d428ee4\" returns successfully"
Sep 12 17:08:18.949215 sshd[6140]: Accepted publickey for core from 139.178.68.195 port 53338 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:18.954177 sshd-session[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:18.964642 systemd-logind[1992]: New session 15 of user core.
Sep 12 17:08:18.973049 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:08:19.035381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622051490.mount: Deactivated successfully.
Sep 12 17:08:19.262380 sshd[6172]: Connection closed by 139.178.68.195 port 53338
Sep 12 17:08:19.262929 sshd-session[6140]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:19.274536 systemd[1]: sshd@14-172.31.16.146:22-139.178.68.195:53338.service: Deactivated successfully.
Sep 12 17:08:19.278009 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:08:19.279713 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:08:19.284067 systemd-logind[1992]: Removed session 15.
Sep 12 17:08:19.594117 kubelet[3645]: I0912 17:08:19.593972 3645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9593beb-c142-41ba-95c3-de6f41b7c5f1" path="/var/lib/kubelet/pods/d9593beb-c142-41ba-95c3-de6f41b7c5f1/volumes"
Sep 12 17:08:21.123375 ntpd[1987]: Deleting interface #14 cali38112029d19, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=12 secs
Sep 12 17:08:21.124036 ntpd[1987]: 12 Sep 17:08:21 ntpd[1987]: Deleting interface #14 cali38112029d19, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=12 secs
Sep 12 17:08:22.188523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443036094.mount: Deactivated successfully.
Sep 12 17:08:22.994389 containerd[2017]: time="2025-09-12T17:08:22.994311951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:22.996562 containerd[2017]: time="2025-09-12T17:08:22.996457467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 12 17:08:22.997529 containerd[2017]: time="2025-09-12T17:08:22.997457535Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:23.003005 containerd[2017]: time="2025-09-12T17:08:23.002909111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:23.005076 containerd[2017]: time="2025-09-12T17:08:23.004458419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.320069082s"
Sep 12 17:08:23.005076 containerd[2017]: time="2025-09-12T17:08:23.004516751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 12 17:08:23.006948 containerd[2017]: time="2025-09-12T17:08:23.006860471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:08:23.013808 containerd[2017]: time="2025-09-12T17:08:23.013177439Z" level=info msg="CreateContainer within sandbox \"7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:08:23.030373 containerd[2017]: time="2025-09-12T17:08:23.030039659Z" level=info msg="Container 346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:08:23.057086 containerd[2017]: time="2025-09-12T17:08:23.057009983Z" level=info msg="CreateContainer within sandbox \"7b5c34bf90a6931db7941dbfaff5f8b76518d6f7361de4e27e331feb5cf72760\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\""
Sep 12 17:08:23.058877 containerd[2017]: time="2025-09-12T17:08:23.058673639Z" level=info msg="StartContainer for \"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\""
Sep 12 17:08:23.063205 containerd[2017]: time="2025-09-12T17:08:23.062957999Z" level=info msg="connecting to shim 346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f" address="unix:///run/containerd/s/84572901c3a18c97f5410b5e52ab8e17e4aa6ef132fa329285c933b0e2af8773" protocol=ttrpc version=3
Sep 12 17:08:23.106893 systemd[1]: Started cri-containerd-346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f.scope - libcontainer container 346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f.
Sep 12 17:08:23.197251 containerd[2017]: time="2025-09-12T17:08:23.197180712Z" level=info msg="StartContainer for \"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\" returns successfully"
Sep 12 17:08:23.421985 kubelet[3645]: I0912 17:08:23.421455 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-kdphf" podStartSLOduration=30.299404376 podStartE2EDuration="47.421406965s" podCreationTimestamp="2025-09-12 17:07:36 +0000 UTC" firstStartedPulling="2025-09-12 17:08:05.884626822 +0000 UTC m=+60.603434186" lastFinishedPulling="2025-09-12 17:08:23.006629411 +0000 UTC m=+77.725436775" observedRunningTime="2025-09-12 17:08:23.419599441 +0000 UTC m=+78.138406829" watchObservedRunningTime="2025-09-12 17:08:23.421406965 +0000 UTC m=+78.140214329"
Sep 12 17:08:24.308356 systemd[1]: Started sshd@15-172.31.16.146:22-139.178.68.195:48452.service - OpenSSH per-connection server daemon (139.178.68.195:48452).
Sep 12 17:08:24.516349 sshd[6236]: Accepted publickey for core from 139.178.68.195 port 48452 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:24.524303 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:24.563669 systemd-logind[1992]: New session 16 of user core.
Sep 12 17:08:24.574980 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:08:24.946871 containerd[2017]: time="2025-09-12T17:08:24.946640644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\" id:\"4f9198fc4d79a6a67cae5ca1c5007710441393a913a0bf21f95a031fb629bc27\" pid:6252 exit_status:1 exited_at:{seconds:1757696904 nanos:945108664}"
Sep 12 17:08:24.962074 sshd[6257]: Connection closed by 139.178.68.195 port 48452
Sep 12 17:08:24.965311 sshd-session[6236]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:24.977849 systemd[1]: sshd@15-172.31.16.146:22-139.178.68.195:48452.service: Deactivated successfully.
Sep 12 17:08:24.987382 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:08:24.991028 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:08:24.997014 systemd-logind[1992]: Removed session 16.
Sep 12 17:08:25.267759 containerd[2017]: time="2025-09-12T17:08:25.267504146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\" id:\"dcffd23a60af1862f15f2615f46a967d3fb9a85e4db377676c456f4017bfcd52\" pid:6288 exited_at:{seconds:1757696905 nanos:266284214}"
Sep 12 17:08:25.306494 containerd[2017]: time="2025-09-12T17:08:25.306401354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:25.313899 containerd[2017]: time="2025-09-12T17:08:25.313785578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 17:08:25.317370 containerd[2017]: time="2025-09-12T17:08:25.317061086Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:25.338833 containerd[2017]: time="2025-09-12T17:08:25.337339094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:08:25.344000 containerd[2017]: time="2025-09-12T17:08:25.343919426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.336970755s"
Sep 12 17:08:25.344000 containerd[2017]: time="2025-09-12T17:08:25.343988222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 17:08:25.358885 containerd[2017]: time="2025-09-12T17:08:25.358621970Z" level=info msg="CreateContainer within sandbox \"34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:08:25.378225 containerd[2017]: time="2025-09-12T17:08:25.378149298Z" level=info msg="Container a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:08:25.444297 containerd[2017]: time="2025-09-12T17:08:25.444221055Z" level=info msg="CreateContainer within sandbox \"34c615332143c0b15ed2dc4101baf7cfc652b5f67fdb44fb7ef1dfa6be0c7a9c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c\""
Sep 12 17:08:25.446140 containerd[2017]: time="2025-09-12T17:08:25.445969683Z" level=info msg="StartContainer for \"a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c\""
Sep 12 17:08:25.454032 containerd[2017]: time="2025-09-12T17:08:25.453969315Z" level=info msg="connecting to shim a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c" address="unix:///run/containerd/s/0e05472364e841a2dc1dfd67813ced545cbb1cf25a6ba02e823d40038fff48f9" protocol=ttrpc version=3
Sep 12 17:08:25.535027 systemd[1]: Started cri-containerd-a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c.scope - libcontainer container a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c.
Sep 12 17:08:25.617352 containerd[2017]: time="2025-09-12T17:08:25.617162440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\" id:\"9f62275d01a133f01908f5e6262d609eec73ba2cd100db2355d3058c0430a3dc\" pid:6311 exited_at:{seconds:1757696905 nanos:616479184}"
Sep 12 17:08:25.710180 containerd[2017]: time="2025-09-12T17:08:25.710067892Z" level=info msg="StartContainer for \"a9fc47fa80c9de2358906e70a96bf7296deab0e115c29d2b1ee3d0ddfe1c906c\" returns successfully"
Sep 12 17:08:25.756402 containerd[2017]: time="2025-09-12T17:08:25.756278536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\" id:\"6aaeb5bc7ee3b51a99148413972ade8db641c1e31c7cfa06371ad7ea12b890f7\" pid:6346 exit_status:1 exited_at:{seconds:1757696905 nanos:755500360}"
Sep 12 17:08:25.797550 kubelet[3645]: I0912 17:08:25.797413 3645 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:08:25.799358 kubelet[3645]: I0912 17:08:25.797951 3645 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:08:30.012053 systemd[1]: Started sshd@16-172.31.16.146:22-139.178.68.195:41908.service - OpenSSH per-connection server daemon (139.178.68.195:41908).
Sep 12 17:08:30.208434 sshd[6387]: Accepted publickey for core from 139.178.68.195 port 41908 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:30.212260 sshd-session[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:30.221147 systemd-logind[1992]: New session 17 of user core.
Sep 12 17:08:30.229085 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:08:30.485320 sshd[6390]: Connection closed by 139.178.68.195 port 41908
Sep 12 17:08:30.486496 sshd-session[6387]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:30.493620 systemd[1]: sshd@16-172.31.16.146:22-139.178.68.195:41908.service: Deactivated successfully.
Sep 12 17:08:30.500570 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:08:30.502897 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:08:30.508115 systemd-logind[1992]: Removed session 17.
Sep 12 17:08:35.534181 systemd[1]: Started sshd@17-172.31.16.146:22-139.178.68.195:41920.service - OpenSSH per-connection server daemon (139.178.68.195:41920).
Sep 12 17:08:35.764628 sshd[6402]: Accepted publickey for core from 139.178.68.195 port 41920 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:35.769057 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:35.782132 systemd-logind[1992]: New session 18 of user core.
Sep 12 17:08:35.792106 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:08:36.133139 sshd[6407]: Connection closed by 139.178.68.195 port 41920
Sep 12 17:08:36.134329 sshd-session[6402]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:36.145172 systemd[1]: sshd@17-172.31.16.146:22-139.178.68.195:41920.service: Deactivated successfully.
Sep 12 17:08:36.151081 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:08:36.153703 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:08:36.173506 systemd[1]: Started sshd@18-172.31.16.146:22-139.178.68.195:41928.service - OpenSSH per-connection server daemon (139.178.68.195:41928).
Sep 12 17:08:36.181504 systemd-logind[1992]: Removed session 18.
Sep 12 17:08:36.389318 sshd[6419]: Accepted publickey for core from 139.178.68.195 port 41928 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:36.390659 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:36.399082 systemd-logind[1992]: New session 19 of user core.
Sep 12 17:08:36.408043 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:08:37.090786 sshd[6422]: Connection closed by 139.178.68.195 port 41928
Sep 12 17:08:37.091268 sshd-session[6419]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:37.100397 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:08:37.102619 systemd[1]: sshd@18-172.31.16.146:22-139.178.68.195:41928.service: Deactivated successfully.
Sep 12 17:08:37.110588 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:08:37.137626 systemd-logind[1992]: Removed session 19.
Sep 12 17:08:37.138193 systemd[1]: Started sshd@19-172.31.16.146:22-139.178.68.195:41930.service - OpenSSH per-connection server daemon (139.178.68.195:41930).
Sep 12 17:08:37.214014 kubelet[3645]: I0912 17:08:37.213521 3645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:08:37.274850 kubelet[3645]: I0912 17:08:37.274704 3645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-drgp6" podStartSLOduration=40.455186504 podStartE2EDuration="1m0.27465155s" podCreationTimestamp="2025-09-12 17:07:37 +0000 UTC" firstStartedPulling="2025-09-12 17:08:05.526195136 +0000 UTC m=+60.245002536" lastFinishedPulling="2025-09-12 17:08:25.345660206 +0000 UTC m=+80.064467582" observedRunningTime="2025-09-12 17:08:26.448283044 +0000 UTC m=+81.167090420" watchObservedRunningTime="2025-09-12 17:08:37.27465155 +0000 UTC m=+91.993458998"
Sep 12 17:08:37.374935 sshd[6432]: Accepted publickey for core from 139.178.68.195 port 41930 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:37.377273 sshd-session[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:37.388878 systemd-logind[1992]: New session 20 of user core.
Sep 12 17:08:37.394325 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:08:38.855737 sshd[6437]: Connection closed by 139.178.68.195 port 41930
Sep 12 17:08:38.857553 sshd-session[6432]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:38.867851 systemd[1]: sshd@19-172.31.16.146:22-139.178.68.195:41930.service: Deactivated successfully.
Sep 12 17:08:38.879520 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:08:38.886723 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:08:38.914189 systemd-logind[1992]: Removed session 20.
Sep 12 17:08:38.920032 systemd[1]: Started sshd@20-172.31.16.146:22-139.178.68.195:41936.service - OpenSSH per-connection server daemon (139.178.68.195:41936).
Sep 12 17:08:39.141867 sshd[6458]: Accepted publickey for core from 139.178.68.195 port 41936 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:39.145490 sshd-session[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:39.160160 systemd-logind[1992]: New session 21 of user core.
Sep 12 17:08:39.165762 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:08:39.930656 sshd[6463]: Connection closed by 139.178.68.195 port 41936
Sep 12 17:08:39.931846 sshd-session[6458]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:39.946157 systemd[1]: sshd@20-172.31.16.146:22-139.178.68.195:41936.service: Deactivated successfully.
Sep 12 17:08:39.953396 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:08:39.955444 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:08:39.978231 systemd[1]: Started sshd@21-172.31.16.146:22-139.178.68.195:41670.service - OpenSSH per-connection server daemon (139.178.68.195:41670).
Sep 12 17:08:39.990387 systemd-logind[1992]: Removed session 21.
Sep 12 17:08:40.206951 sshd[6474]: Accepted publickey for core from 139.178.68.195 port 41670 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:40.210313 sshd-session[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:40.220436 systemd-logind[1992]: New session 22 of user core.
Sep 12 17:08:40.229105 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:08:40.517264 sshd[6483]: Connection closed by 139.178.68.195 port 41670
Sep 12 17:08:40.517744 sshd-session[6474]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:40.532499 systemd[1]: sshd@21-172.31.16.146:22-139.178.68.195:41670.service: Deactivated successfully.
Sep 12 17:08:40.540284 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:08:40.543892 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:08:40.549385 systemd-logind[1992]: Removed session 22.
Sep 12 17:08:42.596498 containerd[2017]: time="2025-09-12T17:08:42.596439524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\" id:\"c5b788dca6fc234ac839a844a258c0128c7d184321f7fdaff3b553152b125b80\" pid:6506 exited_at:{seconds:1757696922 nanos:596048312}"
Sep 12 17:08:45.555077 systemd[1]: Started sshd@22-172.31.16.146:22-139.178.68.195:41682.service - OpenSSH per-connection server daemon (139.178.68.195:41682).
Sep 12 17:08:45.751121 sshd[6518]: Accepted publickey for core from 139.178.68.195 port 41682 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:45.753760 sshd-session[6518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:45.763874 systemd-logind[1992]: New session 23 of user core.
Sep 12 17:08:45.770130 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:08:46.019077 sshd[6521]: Connection closed by 139.178.68.195 port 41682
Sep 12 17:08:46.019980 sshd-session[6518]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:46.027335 systemd[1]: sshd@22-172.31.16.146:22-139.178.68.195:41682.service: Deactivated successfully.
Sep 12 17:08:46.032368 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:08:46.037564 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:08:46.039380 systemd-logind[1992]: Removed session 23.
Sep 12 17:08:47.390298 containerd[2017]: time="2025-09-12T17:08:47.390232392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\" id:\"b203f331719439ab956b418d4981d68af7d99a6167480879ced84d14a05acaba\" pid:6545 exited_at:{seconds:1757696927 nanos:389709708}"
Sep 12 17:08:51.058158 systemd[1]: Started sshd@23-172.31.16.146:22-139.178.68.195:42970.service - OpenSSH per-connection server daemon (139.178.68.195:42970).
Sep 12 17:08:51.262485 sshd[6555]: Accepted publickey for core from 139.178.68.195 port 42970 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:51.265128 sshd-session[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:51.274826 systemd-logind[1992]: New session 24 of user core.
Sep 12 17:08:51.285086 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:08:51.566735 sshd[6558]: Connection closed by 139.178.68.195 port 42970
Sep 12 17:08:51.567709 sshd-session[6555]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:51.574927 systemd[1]: sshd@23-172.31.16.146:22-139.178.68.195:42970.service: Deactivated successfully.
Sep 12 17:08:51.580658 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:08:51.582437 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:08:51.586394 systemd-logind[1992]: Removed session 24.
Sep 12 17:08:55.504813 containerd[2017]: time="2025-09-12T17:08:55.503104940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\" id:\"b91176ab65abae64ae87a32b38ca7b5cca5cb95656208a0c479090a659feb01e\" pid:6579 exited_at:{seconds:1757696935 nanos:502606232}"
Sep 12 17:08:55.628858 containerd[2017]: time="2025-09-12T17:08:55.628517925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\" id:\"acee2f07d2ac006b1d9c7f7188e87c5dcffd9270060127fa5a8ae89bccb796af\" pid:6602 exited_at:{seconds:1757696935 nanos:627982953}"
Sep 12 17:08:56.611757 systemd[1]: Started sshd@24-172.31.16.146:22-139.178.68.195:42978.service - OpenSSH per-connection server daemon (139.178.68.195:42978).
Sep 12 17:08:56.849754 sshd[6617]: Accepted publickey for core from 139.178.68.195 port 42978 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:08:56.855115 sshd-session[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:08:56.869450 systemd-logind[1992]: New session 25 of user core.
Sep 12 17:08:56.881096 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:08:57.213459 sshd[6620]: Connection closed by 139.178.68.195 port 42978
Sep 12 17:08:57.214586 sshd-session[6617]: pam_unix(sshd:session): session closed for user core
Sep 12 17:08:57.227652 systemd[1]: sshd@24-172.31.16.146:22-139.178.68.195:42978.service: Deactivated successfully.
Sep 12 17:08:57.238050 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:08:57.249719 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:08:57.253857 systemd-logind[1992]: Removed session 25.
Sep 12 17:09:02.262291 systemd[1]: Started sshd@25-172.31.16.146:22-139.178.68.195:58342.service - OpenSSH per-connection server daemon (139.178.68.195:58342).
Sep 12 17:09:02.469295 sshd[6635]: Accepted publickey for core from 139.178.68.195 port 58342 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:09:02.473555 sshd-session[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:09:02.488861 systemd-logind[1992]: New session 26 of user core.
Sep 12 17:09:02.498474 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:09:02.858916 sshd[6638]: Connection closed by 139.178.68.195 port 58342
Sep 12 17:09:02.859481 sshd-session[6635]: pam_unix(sshd:session): session closed for user core
Sep 12 17:09:02.871709 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:09:02.873218 systemd[1]: sshd@25-172.31.16.146:22-139.178.68.195:58342.service: Deactivated successfully.
Sep 12 17:09:02.884379 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:09:02.889678 systemd-logind[1992]: Removed session 26.
Sep 12 17:09:05.597895 containerd[2017]: time="2025-09-12T17:09:05.597433026Z" level=info msg="StopPodSandbox for \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\""
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.702 [WARNING][6657] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.703 [INFO][6657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.703 [INFO][6657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" iface="eth0" netns=""
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.703 [INFO][6657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.703 [INFO][6657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.784 [INFO][6665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.784 [INFO][6665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.784 [INFO][6665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.803 [WARNING][6665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.803 [INFO][6665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.806 [INFO][6665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:09:05.835925 containerd[2017]: 2025-09-12 17:09:05.815 [INFO][6657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:05.837332 containerd[2017]: time="2025-09-12T17:09:05.836422111Z" level=info msg="TearDown network for sandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" successfully"
Sep 12 17:09:05.837332 containerd[2017]: time="2025-09-12T17:09:05.836460571Z" level=info msg="StopPodSandbox for \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" returns successfully"
Sep 12 17:09:05.838920 containerd[2017]: time="2025-09-12T17:09:05.838645963Z" level=info msg="RemovePodSandbox for \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\""
Sep 12 17:09:05.839289 containerd[2017]: time="2025-09-12T17:09:05.839157187Z" level=info msg="Forcibly stopping sandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\""
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:05.965 [WARNING][6680] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" WorkloadEndpoint="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:05.966 [INFO][6680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:05.966 [INFO][6680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" iface="eth0" netns=""
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:05.966 [INFO][6680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:05.966 [INFO][6680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.025 [INFO][6687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.026 [INFO][6687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.026 [INFO][6687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.046 [WARNING][6687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.046 [INFO][6687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" HandleID="k8s-pod-network.3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb" Workload="ip--172--31--16--146-k8s-calico--apiserver--6b56fc6589--9xsvp-eth0"
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.052 [INFO][6687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:09:06.063592 containerd[2017]: 2025-09-12 17:09:06.058 [INFO][6680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb"
Sep 12 17:09:06.066927 containerd[2017]: time="2025-09-12T17:09:06.065457161Z" level=info msg="TearDown network for sandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" successfully"
Sep 12 17:09:06.074126 containerd[2017]: time="2025-09-12T17:09:06.074045165Z" level=info msg="Ensure that sandbox 3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb in task-service has been cleanup successfully"
Sep 12 17:09:06.082312 containerd[2017]: time="2025-09-12T17:09:06.081743405Z" level=info msg="RemovePodSandbox \"3ec1978c226f090caa6411d04487304d1522c3a9c6fce74cf99d06bab92d6bdb\" returns successfully"
Sep 12 17:09:07.902295 systemd[1]: Started sshd@26-172.31.16.146:22-139.178.68.195:58356.service - OpenSSH per-connection server daemon (139.178.68.195:58356).
Sep 12 17:09:08.115867 sshd[6694]: Accepted publickey for core from 139.178.68.195 port 58356 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:09:08.122690 sshd-session[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:09:08.141639 systemd-logind[1992]: New session 27 of user core.
Sep 12 17:09:08.150116 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:09:08.564683 sshd[6697]: Connection closed by 139.178.68.195 port 58356
Sep 12 17:09:08.564478 sshd-session[6694]: pam_unix(sshd:session): session closed for user core
Sep 12 17:09:08.578904 systemd[1]: sshd@26-172.31.16.146:22-139.178.68.195:58356.service: Deactivated successfully.
Sep 12 17:09:08.579905 systemd-logind[1992]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:09:08.590006 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:09:08.606253 systemd-logind[1992]: Removed session 27.
Sep 12 17:09:10.717053 containerd[2017]: time="2025-09-12T17:09:10.716657544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\" id:\"6e19fde6c6fb4d3b277991a8cee36cac06ffc6c7924e3fd539bd02656d7abecb\" pid:6721 exited_at:{seconds:1757696950 nanos:716030772}"
Sep 12 17:09:13.614289 systemd[1]: Started sshd@27-172.31.16.146:22-139.178.68.195:48710.service - OpenSSH per-connection server daemon (139.178.68.195:48710).
Sep 12 17:09:13.897968 sshd[6735]: Accepted publickey for core from 139.178.68.195 port 48710 ssh2: RSA SHA256:i+pB9ar7yBJb7oWs2I9Nz9/8YnGp+wXFOInh2xR8DaY
Sep 12 17:09:13.901840 sshd-session[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:09:13.918246 systemd-logind[1992]: New session 28 of user core.
Sep 12 17:09:13.926234 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 17:09:14.257951 sshd[6738]: Connection closed by 139.178.68.195 port 48710
Sep 12 17:09:14.259503 sshd-session[6735]: pam_unix(sshd:session): session closed for user core
Sep 12 17:09:14.271611 systemd[1]: sshd@27-172.31.16.146:22-139.178.68.195:48710.service: Deactivated successfully.
Sep 12 17:09:14.277270 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 17:09:14.284123 systemd-logind[1992]: Session 28 logged out. Waiting for processes to exit.
Sep 12 17:09:14.291875 systemd-logind[1992]: Removed session 28.
Sep 12 17:09:17.423418 containerd[2017]: time="2025-09-12T17:09:17.423231209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4edc2929c5098c22dfcaa0559c7184aaf9946b39d8d531bf03c3abdd04e6059\" id:\"78cbeb8e607fe01e1d02ff7db0177b1207ff8ea02801f48131129f78ee38f078\" pid:6762 exited_at:{seconds:1757696957 nanos:420095993}"
Sep 12 17:09:25.417217 containerd[2017]: time="2025-09-12T17:09:25.417102181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c9476f904a276e53fb57aef4a15853fbbae8edb45959a6447aa05b9303faf58\" id:\"b633c71de3195a3cf69ddff7a13350f9cb8f6a8bcf5d251a297aaa2fad167343\" pid:6788 exited_at:{seconds:1757696965 nanos:416232781}"
Sep 12 17:09:25.550397 containerd[2017]: time="2025-09-12T17:09:25.550315249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"346249e1a7802eb5564fecbe97b5503a18d99fa956d0a480ff0fb8b68370451f\" id:\"15ef2b3ccffa5cb2683e844fa855990963ef576a9dff4fee7fe09421f90256b9\" pid:6810 exited_at:{seconds:1757696965 nanos:549546109}"
Sep 12 17:09:27.542846 systemd[1]: cri-containerd-605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32.scope: Deactivated successfully.
Sep 12 17:09:27.543486 systemd[1]: cri-containerd-605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32.scope: Consumed 6.793s CPU time, 61.1M memory peak, 64K read from disk.
Sep 12 17:09:27.556296 containerd[2017]: time="2025-09-12T17:09:27.556218435Z" level=info msg="received exit event container_id:\"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\" id:\"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\" pid:3445 exit_status:1 exited_at:{seconds:1757696967 nanos:555481935}"
Sep 12 17:09:27.557829 containerd[2017]: time="2025-09-12T17:09:27.557655075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\" id:\"605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32\" pid:3445 exit_status:1 exited_at:{seconds:1757696967 nanos:555481935}"
Sep 12 17:09:27.611605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32-rootfs.mount: Deactivated successfully.
Sep 12 17:09:27.685661 kubelet[3645]: I0912 17:09:27.685402 3645 scope.go:117] "RemoveContainer" containerID="605ecb69143ea4cf10fd8681b3d52febc93cac3473bdfb4efe6e323c78ef3b32"
Sep 12 17:09:27.691809 containerd[2017]: time="2025-09-12T17:09:27.691454656Z" level=info msg="CreateContainer within sandbox \"ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:09:27.708376 containerd[2017]: time="2025-09-12T17:09:27.708319684Z" level=info msg="Container ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:09:27.753702 containerd[2017]: time="2025-09-12T17:09:27.753560536Z" level=info msg="CreateContainer within sandbox \"ccc0bed5aa173bb5011510397cc9a671230c8d21da25316a67aaa64d88065429\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88\""
Sep 12 17:09:27.755850 containerd[2017]: time="2025-09-12T17:09:27.755654488Z" level=info msg="StartContainer for \"ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88\""
Sep 12 17:09:27.765112 containerd[2017]: time="2025-09-12T17:09:27.764946100Z" level=info msg="connecting to shim ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88" address="unix:///run/containerd/s/93a60b6ba735c7604c03740726806d83ab8a577e5b0984ccea147e239ff63e99" protocol=ttrpc version=3
Sep 12 17:09:27.841582 systemd[1]: Started cri-containerd-ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88.scope - libcontainer container ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88.
Sep 12 17:09:27.946754 containerd[2017]: time="2025-09-12T17:09:27.946708421Z" level=info msg="StartContainer for \"ac65ecccbd4d111a6c2e6d4bfedd9fbe3964d497ea06fd0c37ccdbe3b8331a88\" returns successfully"
Sep 12 17:09:28.126158 kubelet[3645]: E0912 17:09:28.125652 3645 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-146?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 17:09:28.397762 systemd[1]: cri-containerd-9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40.scope: Deactivated successfully.
Sep 12 17:09:28.399208 systemd[1]: cri-containerd-9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40.scope: Consumed 25.729s CPU time, 107.9M memory peak, 224K read from disk.
Sep 12 17:09:28.402181 containerd[2017]: time="2025-09-12T17:09:28.402101644Z" level=info msg="received exit event container_id:\"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\" id:\"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\" pid:3976 exit_status:1 exited_at:{seconds:1757696968 nanos:401568628}"
Sep 12 17:09:28.404313 containerd[2017]: time="2025-09-12T17:09:28.404241844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\" id:\"9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40\" pid:3976 exit_status:1 exited_at:{seconds:1757696968 nanos:401568628}"
Sep 12 17:09:28.454736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40-rootfs.mount: Deactivated successfully.
Sep 12 17:09:28.692873 kubelet[3645]: I0912 17:09:28.692712 3645 scope.go:117] "RemoveContainer" containerID="9e4feb6cd137a51ef5b293d76227c952a25364c48952fe8c6bc411f1d9642d40"
Sep 12 17:09:28.701793 containerd[2017]: time="2025-09-12T17:09:28.700536689Z" level=info msg="CreateContainer within sandbox \"95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 17:09:28.727861 containerd[2017]: time="2025-09-12T17:09:28.727700105Z" level=info msg="Container 852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:09:28.749457 containerd[2017]: time="2025-09-12T17:09:28.749388017Z" level=info msg="CreateContainer within sandbox \"95dace60668b3ff61cf0e92f27b9ae83571006976d83f13f482020e4e0279a06\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932\""
Sep 12 17:09:28.751114 containerd[2017]: time="2025-09-12T17:09:28.751058705Z" level=info msg="StartContainer for \"852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932\""
Sep 12 17:09:28.753788 containerd[2017]: time="2025-09-12T17:09:28.753671929Z" level=info msg="connecting to shim 852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932" address="unix:///run/containerd/s/3fad37ead4c3191f339fcebe555eab2fc2796aeb5f5e22c85f38254abfd322ab" protocol=ttrpc version=3
Sep 12 17:09:28.811093 systemd[1]: Started cri-containerd-852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932.scope - libcontainer container 852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932.
Sep 12 17:09:28.885569 containerd[2017]: time="2025-09-12T17:09:28.885508626Z" level=info msg="StartContainer for \"852e8ab5dc3d6e087bc157eef61f4bb1a09560f8cc526a780834768b8d4f9932\" returns successfully"
Sep 12 17:09:33.309493 systemd[1]: cri-containerd-da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e.scope: Deactivated successfully.
Sep 12 17:09:33.311041 systemd[1]: cri-containerd-da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e.scope: Consumed 5.380s CPU time, 22M memory peak, 128K read from disk.
Sep 12 17:09:33.316511 containerd[2017]: time="2025-09-12T17:09:33.316437788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\" id:\"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\" pid:3479 exit_status:1 exited_at:{seconds:1757696973 nanos:315674360}"
Sep 12 17:09:33.317682 containerd[2017]: time="2025-09-12T17:09:33.317139200Z" level=info msg="received exit event container_id:\"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\" id:\"da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e\" pid:3479 exit_status:1 exited_at:{seconds:1757696973 nanos:315674360}"
Sep 12 17:09:33.363212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e-rootfs.mount: Deactivated successfully.
Sep 12 17:09:33.729578 kubelet[3645]: I0912 17:09:33.729430 3645 scope.go:117] "RemoveContainer" containerID="da663108fb8d368f96ada6536549f6104e9ecfba352e1dfd55ef36ea3dc5924e"
Sep 12 17:09:33.735152 containerd[2017]: time="2025-09-12T17:09:33.735081826Z" level=info msg="CreateContainer within sandbox \"18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:09:33.757595 containerd[2017]: time="2025-09-12T17:09:33.757515406Z" level=info msg="Container 139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:09:33.773560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698154966.mount: Deactivated successfully.
Sep 12 17:09:33.780304 containerd[2017]: time="2025-09-12T17:09:33.780230590Z" level=info msg="CreateContainer within sandbox \"18c4916f0ad3969f2719b716dd8f04c385dfaa657d9282e99397e4cf94445b2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f\""
Sep 12 17:09:33.781320 containerd[2017]: time="2025-09-12T17:09:33.781249954Z" level=info msg="StartContainer for \"139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f\""
Sep 12 17:09:33.785746 containerd[2017]: time="2025-09-12T17:09:33.785650714Z" level=info msg="connecting to shim 139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f" address="unix:///run/containerd/s/ce688d1e8a28820cc3d35919d2c973bf7e1df8aadd39fe674a6d5e8c6988da3a" protocol=ttrpc version=3
Sep 12 17:09:33.837112 systemd[1]: Started cri-containerd-139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f.scope - libcontainer container 139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f.
Sep 12 17:09:33.923406 containerd[2017]: time="2025-09-12T17:09:33.923348879Z" level=info msg="StartContainer for \"139077e1b841e059431f624d3e19d66750f23db64ac8d1444530cc9600dfc41f\" returns successfully"
Sep 12 17:09:38.126621 kubelet[3645]: E0912 17:09:38.126327 3645 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-146?timeout=10s\": context deadline exceeded"