Jun 20 18:23:12.085622 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jun 20 18:23:12.085730 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jun 20 16:58:52 -00 2025 Jun 20 18:23:12.085759 kernel: KASLR disabled due to lack of seed Jun 20 18:23:12.085776 kernel: efi: EFI v2.7 by EDK II Jun 20 18:23:12.085793 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Jun 20 18:23:12.085809 kernel: secureboot: Secure boot disabled Jun 20 18:23:12.085826 kernel: ACPI: Early table checksum verification disabled Jun 20 18:23:12.085841 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jun 20 18:23:12.085856 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jun 20 18:23:12.085871 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 20 18:23:12.088815 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jun 20 18:23:12.088836 kernel: ACPI: FACS 0x0000000078630000 000040 Jun 20 18:23:12.088852 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 20 18:23:12.088868 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jun 20 18:23:12.088886 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jun 20 18:23:12.088902 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jun 20 18:23:12.088923 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 20 18:23:12.088940 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jun 20 18:23:12.088956 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jun 20 18:23:12.088972 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jun 20 18:23:12.088989 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jun 20 18:23:12.089004 kernel: printk: legacy bootconsole [uart0] enabled Jun 20 18:23:12.089021 kernel: ACPI: Use ACPI SPCR as default console: Yes Jun 20 18:23:12.089037 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jun 20 18:23:12.089055 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff] Jun 20 18:23:12.089072 kernel: Zone ranges: Jun 20 18:23:12.089095 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 20 18:23:12.089114 kernel: DMA32 empty Jun 20 18:23:12.089130 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jun 20 18:23:12.089146 kernel: Device empty Jun 20 18:23:12.089163 kernel: Movable zone start for each node Jun 20 18:23:12.089179 kernel: Early memory node ranges Jun 20 18:23:12.089196 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jun 20 18:23:12.089213 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jun 20 18:23:12.089229 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jun 20 18:23:12.089246 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jun 20 18:23:12.089262 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jun 20 18:23:12.089279 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jun 20 18:23:12.089302 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jun 20 18:23:12.089319 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jun 20 18:23:12.089342 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jun 20 18:23:12.089360 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jun 20 18:23:12.089377 kernel: psci: probing for conduit method from ACPI. Jun 20 18:23:12.089397 kernel: psci: PSCIv1.0 detected in firmware. Jun 20 18:23:12.089414 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 18:23:12.089431 kernel: psci: Trusted OS migration not required Jun 20 18:23:12.089448 kernel: psci: SMC Calling Convention v1.1 Jun 20 18:23:12.089464 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jun 20 18:23:12.089481 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jun 20 18:23:12.089498 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 18:23:12.089515 kernel: Detected PIPT I-cache on CPU0 Jun 20 18:23:12.089532 kernel: CPU features: detected: GIC system register CPU interface Jun 20 18:23:12.089548 kernel: CPU features: detected: Spectre-v2 Jun 20 18:23:12.089565 kernel: CPU features: detected: Spectre-v3a Jun 20 18:23:12.089582 kernel: CPU features: detected: Spectre-BHB Jun 20 18:23:12.089603 kernel: CPU features: detected: ARM erratum 1742098 Jun 20 18:23:12.089620 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jun 20 18:23:12.089636 kernel: alternatives: applying boot alternatives Jun 20 18:23:12.091752 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:23:12.091798 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:23:12.091817 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 18:23:12.091835 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:23:12.091852 kernel: Fallback order for Node 0: 0 Jun 20 18:23:12.091869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jun 20 18:23:12.091888 kernel: Policy zone: Normal Jun 20 18:23:12.091915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:23:12.091932 kernel: software IO TLB: area num 2. Jun 20 18:23:12.091949 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jun 20 18:23:12.091967 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:23:12.091984 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:23:12.092003 kernel: rcu: RCU event tracing is enabled. Jun 20 18:23:12.092020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:23:12.092038 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:23:12.092055 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:23:12.092073 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:23:12.092089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:23:12.092106 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 18:23:12.092128 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:23:12.092145 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 18:23:12.092162 kernel: GICv3: 96 SPIs implemented Jun 20 18:23:12.092178 kernel: GICv3: 0 Extended SPIs implemented Jun 20 18:23:12.092195 kernel: Root IRQ handler: gic_handle_irq Jun 20 18:23:12.092213 kernel: GICv3: GICv3 features: 16 PPIs Jun 20 18:23:12.092230 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jun 20 18:23:12.092247 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jun 20 18:23:12.092265 kernel: ITS [mem 0x10080000-0x1009ffff] Jun 20 18:23:12.092282 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jun 20 18:23:12.092299 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jun 20 18:23:12.092320 kernel: GICv3: using LPI property table @0x00000004000e0000 Jun 20 18:23:12.092339 kernel: ITS: Using hypervisor restricted LPI range [128] Jun 20 18:23:12.092356 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jun 20 18:23:12.092372 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:23:12.092389 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jun 20 18:23:12.092405 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jun 20 18:23:12.092422 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jun 20 18:23:12.092439 kernel: Console: colour dummy device 80x25 Jun 20 18:23:12.092456 kernel: printk: legacy console [tty1] enabled Jun 20 18:23:12.092474 kernel: ACPI: Core revision 20240827 Jun 20 18:23:12.092491 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jun 20 18:23:12.092513 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:23:12.092530 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 18:23:12.092547 kernel: landlock: Up and running. Jun 20 18:23:12.092564 kernel: SELinux: Initializing. Jun 20 18:23:12.092581 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:23:12.092598 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:23:12.092615 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:23:12.092633 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:23:12.092651 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 18:23:12.092722 kernel: Remapping and enabling EFI services. Jun 20 18:23:12.092741 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:23:12.092759 kernel: Detected PIPT I-cache on CPU1 Jun 20 18:23:12.092776 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jun 20 18:23:12.092793 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jun 20 18:23:12.092811 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jun 20 18:23:12.092829 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:23:12.092847 kernel: SMP: Total of 2 processors activated. 
Jun 20 18:23:12.092863 kernel: CPU: All CPU(s) started at EL1 Jun 20 18:23:12.092886 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 18:23:12.092915 kernel: CPU features: detected: 32-bit EL1 Support Jun 20 18:23:12.092933 kernel: CPU features: detected: CRC32 instructions Jun 20 18:23:12.092955 kernel: alternatives: applying system-wide alternatives Jun 20 18:23:12.092974 kernel: Memory: 3813536K/4030464K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 212156K reserved, 0K cma-reserved) Jun 20 18:23:12.092993 kernel: devtmpfs: initialized Jun 20 18:23:12.093011 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:23:12.093030 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:23:12.093052 kernel: 17024 pages in range for non-PLT usage Jun 20 18:23:12.093070 kernel: 508544 pages in range for PLT usage Jun 20 18:23:12.093088 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:23:12.093106 kernel: SMBIOS 3.0.0 present. Jun 20 18:23:12.093125 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jun 20 18:23:12.093142 kernel: DMI: Memory slots populated: 0/0 Jun 20 18:23:12.093160 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:23:12.093178 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 18:23:12.093196 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 18:23:12.093218 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 18:23:12.093235 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:23:12.093253 kernel: audit: type=2000 audit(0.242:1): state=initialized audit_enabled=0 res=1 Jun 20 18:23:12.093270 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:23:12.093289 kernel: cpuidle: using governor menu Jun 20 18:23:12.093308 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 20 18:23:12.093325 kernel: ASID allocator initialised with 65536 entries Jun 20 18:23:12.093343 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:23:12.093364 kernel: Serial: AMBA PL011 UART driver Jun 20 18:23:12.093384 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:23:12.093401 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:23:12.093419 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 18:23:12.093437 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 18:23:12.093455 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:23:12.093472 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:23:12.093490 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 18:23:12.093508 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 18:23:12.093531 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:23:12.093549 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:23:12.093568 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:23:12.093587 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:23:12.093606 kernel: ACPI: Interpreter enabled Jun 20 18:23:12.093624 kernel: ACPI: Using GIC for interrupt routing Jun 20 18:23:12.093643 kernel: ACPI: MCFG table detected, 1 entries Jun 20 18:23:12.100730 kernel: ACPI: CPU0 has been hot-added Jun 20 18:23:12.100763 kernel: ACPI: CPU1 has been hot-added Jun 20 18:23:12.100782 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jun 20 18:23:12.101090 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 18:23:12.101286 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 20 18:23:12.101483 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 20 18:23:12.101703 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jun 20 18:23:12.101903 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jun 20 18:23:12.101929 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jun 20 18:23:12.101948 kernel: acpiphp: Slot [1] registered Jun 20 18:23:12.101976 kernel: acpiphp: Slot [2] registered Jun 20 18:23:12.101994 kernel: acpiphp: Slot [3] registered Jun 20 18:23:12.102011 kernel: acpiphp: Slot [4] registered Jun 20 18:23:12.102029 kernel: acpiphp: Slot [5] registered Jun 20 18:23:12.102047 kernel: acpiphp: Slot [6] registered Jun 20 18:23:12.102064 kernel: acpiphp: Slot [7] registered Jun 20 18:23:12.102082 kernel: acpiphp: Slot [8] registered Jun 20 18:23:12.102099 kernel: acpiphp: Slot [9] registered Jun 20 18:23:12.102117 kernel: acpiphp: Slot [10] registered Jun 20 18:23:12.102138 kernel: acpiphp: Slot [11] registered Jun 20 18:23:12.102156 kernel: acpiphp: Slot [12] registered Jun 20 18:23:12.102173 kernel: acpiphp: Slot [13] registered Jun 20 18:23:12.102190 kernel: acpiphp: Slot [14] registered Jun 20 18:23:12.102207 kernel: acpiphp: Slot [15] registered Jun 20 18:23:12.102224 kernel: acpiphp: Slot [16] registered Jun 20 18:23:12.102242 kernel: acpiphp: Slot [17] registered Jun 20 18:23:12.102259 kernel: acpiphp: Slot [18] registered Jun 20 18:23:12.102276 kernel: acpiphp: Slot [19] registered Jun 20 18:23:12.102293 kernel: acpiphp: Slot [20] registered Jun 20 18:23:12.102315 kernel: acpiphp: Slot [21] registered Jun 20 
18:23:12.102332 kernel: acpiphp: Slot [22] registered Jun 20 18:23:12.102350 kernel: acpiphp: Slot [23] registered Jun 20 18:23:12.102367 kernel: acpiphp: Slot [24] registered Jun 20 18:23:12.102384 kernel: acpiphp: Slot [25] registered Jun 20 18:23:12.102401 kernel: acpiphp: Slot [26] registered Jun 20 18:23:12.102419 kernel: acpiphp: Slot [27] registered Jun 20 18:23:12.102436 kernel: acpiphp: Slot [28] registered Jun 20 18:23:12.102453 kernel: acpiphp: Slot [29] registered Jun 20 18:23:12.102474 kernel: acpiphp: Slot [30] registered Jun 20 18:23:12.102492 kernel: acpiphp: Slot [31] registered Jun 20 18:23:12.102509 kernel: PCI host bridge to bus 0000:00 Jun 20 18:23:12.104482 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jun 20 18:23:12.104808 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 20 18:23:12.105007 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jun 20 18:23:12.105189 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jun 20 18:23:12.105452 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jun 20 18:23:12.105752 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jun 20 18:23:12.105978 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jun 20 18:23:12.106197 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jun 20 18:23:12.106406 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jun 20 18:23:12.106620 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 18:23:12.108956 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jun 20 18:23:12.109182 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jun 20 18:23:12.109370 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jun 20 18:23:12.109567 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jun 20 18:23:12.109796 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 18:23:12.109999 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Jun 20 18:23:12.110203 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Jun 20 18:23:12.110420 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Jun 20 18:23:12.110623 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Jun 20 18:23:12.115465 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Jun 20 18:23:12.115737 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jun 20 18:23:12.115936 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 20 18:23:12.116125 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jun 20 18:23:12.116153 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 20 18:23:12.116172 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 20 18:23:12.116205 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 20 18:23:12.116223 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 20 18:23:12.116241 kernel: iommu: Default domain type: Translated Jun 20 18:23:12.116260 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 18:23:12.116278 kernel: efivars: Registered efivars operations Jun 20 18:23:12.116296 kernel: vgaarb: loaded Jun 20 18:23:12.116314 kernel: clocksource: Switched to clocksource arch_sys_counter 
Jun 20 18:23:12.116333 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:23:12.116352 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:23:12.116378 kernel: pnp: PnP ACPI init Jun 20 18:23:12.116623 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jun 20 18:23:12.117613 kernel: pnp: PnP ACPI: found 1 devices Jun 20 18:23:12.117688 kernel: NET: Registered PF_INET protocol family Jun 20 18:23:12.117712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 18:23:12.117730 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 18:23:12.117748 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:23:12.117767 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:23:12.117793 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 18:23:12.117812 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 18:23:12.117830 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:23:12.117848 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:23:12.117866 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:23:12.117884 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:23:12.117901 kernel: kvm [1]: HYP mode not available Jun 20 18:23:12.117919 kernel: Initialise system trusted keyrings Jun 20 18:23:12.117938 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 18:23:12.117961 kernel: Key type asymmetric registered Jun 20 18:23:12.117979 kernel: Asymmetric key parser 'x509' registered Jun 20 18:23:12.117997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 20 18:23:12.118014 kernel: io scheduler mq-deadline registered Jun 20 18:23:12.118032 kernel: io scheduler kyber registered Jun 20 18:23:12.118050 kernel: io scheduler bfq registered Jun 20 18:23:12.118296 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jun 20 18:23:12.118325 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 20 18:23:12.118349 kernel: ACPI: button: Power Button [PWRB] Jun 20 18:23:12.118368 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jun 20 18:23:12.118385 kernel: ACPI: button: Sleep Button [SLPB] Jun 20 18:23:12.118403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:23:12.118421 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 20 18:23:12.118618 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jun 20 18:23:12.118646 kernel: printk: legacy console [ttyS0] disabled Jun 20 18:23:12.118942 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jun 20 18:23:12.118964 kernel: printk: legacy console [ttyS0] enabled Jun 20 18:23:12.118990 kernel: printk: legacy bootconsole [uart0] disabled Jun 20 18:23:12.119009 kernel: thunder_xcv, ver 1.0 Jun 20 18:23:12.119026 kernel: thunder_bgx, ver 1.0 Jun 20 18:23:12.119044 kernel: nicpf, ver 1.0 Jun 20 18:23:12.119061 kernel: nicvf, ver 1.0 Jun 20 18:23:12.119332 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 18:23:12.119515 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:23:11 UTC (1750443791) Jun 20 18:23:12.119540 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:23:12.119564 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 
(0,80000003) counters available Jun 20 18:23:12.119582 kernel: watchdog: NMI not fully supported Jun 20 18:23:12.119599 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:23:12.119617 kernel: watchdog: Hard watchdog permanently disabled Jun 20 18:23:12.119634 kernel: Segment Routing with IPv6 Jun 20 18:23:12.119652 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:23:12.121129 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:23:12.121148 kernel: Key type dns_resolver registered Jun 20 18:23:12.121165 kernel: registered taskstats version 1 Jun 20 18:23:12.121183 kernel: Loading compiled-in X.509 certificates Jun 20 18:23:12.121209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 4dab98fc4de70d482d00f54d1877f6231fc25377' Jun 20 18:23:12.121227 kernel: Demotion targets for Node 0: null Jun 20 18:23:12.121244 kernel: Key type .fscrypt registered Jun 20 18:23:12.121262 kernel: Key type fscrypt-provisioning registered Jun 20 18:23:12.121279 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:23:12.121297 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:23:12.121314 kernel: ima: No architecture policies found Jun 20 18:23:12.121332 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 18:23:12.121349 kernel: clk: Disabling unused clocks Jun 20 18:23:12.121370 kernel: PM: genpd: Disabling unused power domains Jun 20 18:23:12.121388 kernel: Warning: unable to open an initial console. Jun 20 18:23:12.121405 kernel: Freeing unused kernel memory: 39424K Jun 20 18:23:12.121423 kernel: Run /init as init process Jun 20 18:23:12.121440 kernel: with arguments: Jun 20 18:23:12.121457 kernel: /init Jun 20 18:23:12.121474 kernel: with environment: Jun 20 18:23:12.121491 kernel: HOME=/ Jun 20 18:23:12.121509 kernel: TERM=linux Jun 20 18:23:12.121529 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:23:12.121549 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:23:12.121573 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:23:12.121593 systemd[1]: Detected virtualization amazon. Jun 20 18:23:12.121612 systemd[1]: Detected architecture arm64. Jun 20 18:23:12.121630 systemd[1]: Running in initrd. Jun 20 18:23:12.121648 systemd[1]: No hostname configured, using default hostname. Jun 20 18:23:12.121693 systemd[1]: Hostname set to . Jun 20 18:23:12.121712 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:23:12.121731 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:23:12.121750 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:23:12.121769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:23:12.121790 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:23:12.121856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:23:12.121879 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:23:12.121906 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jun 20 18:23:12.121929 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:23:12.121950 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:23:12.121969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:23:12.121989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:23:12.122008 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:23:12.122026 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:23:12.122049 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:23:12.122073 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:23:12.122093 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:23:12.122112 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:23:12.122131 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:23:12.122150 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:23:12.122170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:23:12.122190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:23:12.122210 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:23:12.122233 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:23:12.122252 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:23:12.122272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:23:12.122291 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:23:12.122311 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 18:23:12.122330 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:23:12.122349 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:23:12.122368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:23:12.122391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:12.122410 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:23:12.122430 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:23:12.122449 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:23:12.122469 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:23:12.122537 systemd-journald[257]: Collecting audit messages is disabled. Jun 20 18:23:12.122580 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:23:12.122599 kernel: Bridge firewalling registered Jun 20 18:23:12.122625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:23:12.122646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:23:12.122739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jun 20 18:23:12.122765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:23:12.122785 systemd-journald[257]: Journal started Jun 20 18:23:12.122826 systemd-journald[257]: Runtime Journal (/run/log/journal/ec25d52bf8de425e57d9d94bf208c7af) is 8M, max 75.3M, 67.3M free. Jun 20 18:23:12.042077 systemd-modules-load[259]: Inserted module 'overlay' Jun 20 18:23:12.081905 systemd-modules-load[259]: Inserted module 'br_netfilter' Jun 20 18:23:12.135705 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:23:12.148315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:12.153694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:23:12.163889 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:23:12.187861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:23:12.199731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:23:12.218274 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 18:23:12.229617 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:23:12.235340 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:23:12.246195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:23:12.264459 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:23:12.295463 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:23:12.372782 systemd-resolved[298]: Positive Trust Anchors: Jun 20 18:23:12.372817 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:23:12.372879 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:23:12.478699 kernel: SCSI subsystem initialized Jun 20 18:23:12.487682 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:23:12.499705 kernel: iscsi: registered transport (tcp) Jun 20 18:23:12.521103 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:23:12.521176 kernel: QLogic iSCSI HBA Driver Jun 20 18:23:12.554840 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jun 20 18:23:12.593113 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:23:12.602623 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:23:12.656716 kernel: random: crng init done Jun 20 18:23:12.656987 systemd-resolved[298]: Defaulting to hostname 'linux'. Jun 20 18:23:12.660879 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:23:12.663561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:23:12.695409 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:23:12.701510 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:23:12.808751 kernel: raid6: neonx8 gen() 6354 MB/s Jun 20 18:23:12.825714 kernel: raid6: neonx4 gen() 6307 MB/s Jun 20 18:23:12.842718 kernel: raid6: neonx2 gen() 5384 MB/s Jun 20 18:23:12.859715 kernel: raid6: neonx1 gen() 3884 MB/s Jun 20 18:23:12.876715 kernel: raid6: int64x8 gen() 3588 MB/s Jun 20 18:23:12.893717 kernel: raid6: int64x4 gen() 3662 MB/s Jun 20 18:23:12.910720 kernel: raid6: int64x2 gen() 3540 MB/s Jun 20 18:23:12.928681 kernel: raid6: int64x1 gen() 2734 MB/s Jun 20 18:23:12.928758 kernel: raid6: using algorithm neonx8 gen() 6354 MB/s Jun 20 18:23:12.947591 kernel: raid6: .... xor() 4681 MB/s, rmw enabled Jun 20 18:23:12.947695 kernel: raid6: using neon recovery algorithm Jun 20 18:23:12.955723 kernel: xor: measuring software checksum speed Jun 20 18:23:12.957867 kernel: 8regs : 11185 MB/sec Jun 20 18:23:12.957937 kernel: 32regs : 13019 MB/sec Jun 20 18:23:12.959182 kernel: arm64_neon : 8833 MB/sec Jun 20 18:23:12.959264 kernel: xor: using function: 32regs (13019 MB/sec) Jun 20 18:23:13.054710 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:23:13.065730 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:23:13.075854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:23:13.134526 systemd-udevd[506]: Using default interface naming scheme 'v255'. Jun 20 18:23:13.145645 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:23:13.151901 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:23:13.192082 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation Jun 20 18:23:13.235459 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:23:13.242101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:23:13.371947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:23:13.378565 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:23:13.538718 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 20 18:23:13.538795 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jun 20 18:23:13.548284 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 20 18:23:13.548625 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 20 18:23:13.558690 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:7b:43:35:82:f9 Jun 20 18:23:13.562091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:23:13.564755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 18:23:13.572266 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 20 18:23:13.573393 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 20 18:23:13.567448 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:13.572118 (udev-worker)[579]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:23:13.578724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:13.581607 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:23:13.595797 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 18:23:13.606122 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 18:23:13.606183 kernel: GPT:9289727 != 16777215 Jun 20 18:23:13.606209 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 18:23:13.606385 kernel: GPT:9289727 != 16777215 Jun 20 18:23:13.608169 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 18:23:13.608249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:13.632028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:13.653698 kernel: nvme nvme0: using unchecked data buffer Jun 20 18:23:13.808546 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 20 18:23:13.857715 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:23:13.880592 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 20 18:23:13.898601 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 20 18:23:13.904215 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 20 18:23:13.935588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:23:13.941724 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:23:13.944424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:23:13.952399 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:23:13.958026 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:23:13.964395 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:23:13.991156 disk-uuid[686]: Primary Header is updated. Jun 20 18:23:13.991156 disk-uuid[686]: Secondary Entries is updated. Jun 20 18:23:13.991156 disk-uuid[686]: Secondary Header is updated. Jun 20 18:23:14.001718 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:14.015431 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:23:15.021704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:23:15.024703 disk-uuid[687]: The operation has completed successfully. Jun 20 18:23:15.199648 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:23:15.200260 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:23:15.303831 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:23:15.340413 sh[955]: Success Jun 20 18:23:15.361988 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 20 18:23:15.362100 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:23:15.364082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 18:23:15.376703 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 20 18:23:15.476512 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:23:15.483814 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:23:15.500113 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 18:23:15.525880 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 18:23:15.525949 kernel: BTRFS: device fsid eac9c4a0-5098-4f12-a7ad-af09956ff0e3 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (979) Jun 20 18:23:15.530338 kernel: BTRFS info (device dm-0): first mount of filesystem eac9c4a0-5098-4f12-a7ad-af09956ff0e3 Jun 20 18:23:15.530404 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:15.531590 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 18:23:15.647164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:23:15.652210 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:23:15.657715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:23:15.663265 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:23:15.670029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:23:15.729751 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Jun 20 18:23:15.734275 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:15.734344 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:15.735627 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 18:23:15.749840 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:15.750599 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:23:15.757080 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:23:15.858877 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:23:15.866147 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:23:15.937074 systemd-networkd[1148]: lo: Link UP Jun 20 18:23:15.937096 systemd-networkd[1148]: lo: Gained carrier Jun 20 18:23:15.942480 systemd-networkd[1148]: Enumeration completed Jun 20 18:23:15.944242 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:23:15.946640 systemd[1]: Reached target network.target - Network. Jun 20 18:23:15.948815 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:15.950784 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 18:23:15.961650 systemd-networkd[1148]: eth0: Link UP Jun 20 18:23:15.961697 systemd-networkd[1148]: eth0: Gained carrier Jun 20 18:23:15.961717 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:15.975731 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.21.135/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:23:16.200739 ignition[1069]: Ignition 2.21.0 Jun 20 18:23:16.200772 ignition[1069]: Stage: fetch-offline Jun 20 18:23:16.201137 ignition[1069]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:16.201159 ignition[1069]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:16.204424 ignition[1069]: Ignition finished successfully Jun 20 18:23:16.211893 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:23:16.218688 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:23:16.257201 ignition[1161]: Ignition 2.21.0 Jun 20 18:23:16.257232 ignition[1161]: Stage: fetch Jun 20 18:23:16.257919 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:16.257943 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:16.258111 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:16.270982 ignition[1161]: PUT result: OK Jun 20 18:23:16.274441 ignition[1161]: parsed url from cmdline: "" Jun 20 18:23:16.274462 ignition[1161]: no config URL provided Jun 20 18:23:16.274478 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:23:16.274503 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:23:16.274542 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:16.278595 ignition[1161]: PUT result: OK Jun 20 18:23:16.280527 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 20 18:23:16.292782 ignition[1161]: GET result: OK Jun 20 18:23:16.294227 ignition[1161]: parsing config with SHA512: bc2273bcd5b1562ff66c6b9d737183ae09ffc48b6a2b084d0b2767ad0cd6ecd4b651aa64db978022bf50342c9106b604e85cb602ddf2c5b7816c50205eb36fdf Jun 20 18:23:16.303317 unknown[1161]: fetched base config from "system" Jun 20 18:23:16.303346 unknown[1161]: fetched base config from "system" Jun 20 18:23:16.303360 unknown[1161]: fetched user config from "aws" Jun 20 18:23:16.305443 ignition[1161]: fetch: fetch complete Jun 20 18:23:16.305455 ignition[1161]: fetch: fetch passed Jun 20 18:23:16.306614 ignition[1161]: Ignition finished successfully Jun 20 18:23:16.315126 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:23:16.321904 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:23:16.360034 ignition[1167]: Ignition 2.21.0 Jun 20 18:23:16.360527 ignition[1167]: Stage: kargs Jun 20 18:23:16.361074 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:16.361097 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:16.361238 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:16.365744 ignition[1167]: PUT result: OK Jun 20 18:23:16.374462 ignition[1167]: kargs: kargs passed Jun 20 18:23:16.374577 ignition[1167]: Ignition finished successfully Jun 20 18:23:16.380623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:23:16.386189 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 18:23:16.436103 ignition[1174]: Ignition 2.21.0 Jun 20 18:23:16.437866 ignition[1174]: Stage: disks Jun 20 18:23:16.438512 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:16.438544 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:16.438721 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:16.442022 ignition[1174]: PUT result: OK Jun 20 18:23:16.453092 ignition[1174]: disks: disks passed Jun 20 18:23:16.453439 ignition[1174]: Ignition finished successfully Jun 20 18:23:16.458874 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:23:16.463632 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:23:16.466392 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:23:16.474622 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:23:16.477011 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:23:16.484602 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:23:16.488385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:23:16.541642 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 20 18:23:16.548637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:23:16.556032 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:23:16.677694 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 40d60ae8-3eda-4465-8dd7-9dbfcfd71664 r/w with ordered data mode. Quota mode: none. Jun 20 18:23:16.679031 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:23:16.680449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:23:16.685784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:23:16.700312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:23:16.703301 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 18:23:16.703381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:23:16.703428 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:23:16.728433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:23:16.737874 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:23:16.763692 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Jun 20 18:23:16.767598 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:16.767638 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:16.768886 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 18:23:16.777466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:23:17.099730 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:23:17.108708 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:23:17.117643 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:23:17.135480 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:23:17.431373 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:23:17.436585 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:23:17.450467 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:23:17.470033 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:23:17.472952 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:17.508198 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:23:17.519932 ignition[1314]: INFO : Ignition 2.21.0 Jun 20 18:23:17.522182 ignition[1314]: INFO : Stage: mount Jun 20 18:23:17.523844 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:17.523844 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:17.523844 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:17.533234 ignition[1314]: INFO : PUT result: OK Jun 20 18:23:17.537371 ignition[1314]: INFO : mount: mount passed Jun 20 18:23:17.539271 ignition[1314]: INFO : Ignition finished successfully Jun 20 18:23:17.543466 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:23:17.549115 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:23:17.681714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:23:17.728696 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1325) Jun 20 18:23:17.733247 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:23:17.733292 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:23:17.733327 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 18:23:17.742206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:23:17.786473 ignition[1342]: INFO : Ignition 2.21.0 Jun 20 18:23:17.786473 ignition[1342]: INFO : Stage: files Jun 20 18:23:17.789954 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:17.789954 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:17.789954 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:17.797986 ignition[1342]: INFO : PUT result: OK Jun 20 18:23:17.803266 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:23:17.807755 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:23:17.807755 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:23:17.822100 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:23:17.825305 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:23:17.828806 unknown[1342]: wrote ssh authorized keys file for user: core Jun 20 18:23:17.832316 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:23:17.842958 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 18:23:17.842958 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jun 20 18:23:17.944320 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:23:18.039793 systemd-networkd[1148]: eth0: Gained IPv6LL Jun 20 18:23:18.108456 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:23:18.116337 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:23:18.144921 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:23:18.144921 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:23:18.144921 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:23:18.157286 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:23:18.157286 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:23:18.157286 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jun 20 18:23:18.952836 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 18:23:19.375007 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:23:19.375007 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 18:23:19.382400 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:23:19.386246 ignition[1342]: INFO : files: files passed Jun 20 18:23:19.386246 ignition[1342]: INFO : Ignition finished successfully Jun 20 18:23:19.390972 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:23:19.392524 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:23:19.397026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:23:19.444347 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:23:19.446957 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:23:19.463412 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:23:19.463412 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:23:19.471188 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:23:19.477472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:23:19.479455 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:23:19.481564 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jun 20 18:23:19.553606 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:23:19.553989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:23:19.562565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:23:19.565433 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:23:19.570173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:23:19.574062 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:23:19.627489 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:23:19.634858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:23:19.670950 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:23:19.676175 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:23:19.680055 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:23:19.683123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:23:19.683377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:23:19.693173 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:23:19.698043 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:23:19.700557 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:23:19.707729 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:23:19.711598 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:23:19.717431 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:23:19.721275 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:23:19.724918 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:23:19.729090 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:23:19.735830 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:23:19.738707 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:23:19.744361 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:23:19.744810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:23:19.751833 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:23:19.756581 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:23:19.757192 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:23:19.763088 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:23:19.767401 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:23:19.767632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:23:19.776705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:23:19.777132 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:23:19.785484 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:23:19.785724 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jun 20 18:23:19.792232 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:23:19.797199 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:23:19.798115 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:23:19.807782 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:23:19.810769 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:23:19.811637 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:23:19.828098 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:23:19.828337 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:23:19.846096 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:23:19.846341 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:23:19.871695 ignition[1397]: INFO : Ignition 2.21.0 Jun 20 18:23:19.871695 ignition[1397]: INFO : Stage: umount Jun 20 18:23:19.876679 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:23:19.876679 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:23:19.876679 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:23:19.884280 ignition[1397]: INFO : PUT result: OK Jun 20 18:23:19.889330 ignition[1397]: INFO : umount: umount passed Jun 20 18:23:19.893089 ignition[1397]: INFO : Ignition finished successfully Jun 20 18:23:19.895766 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:23:19.898232 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:23:19.902635 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:23:19.902778 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:23:19.906896 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:23:19.907008 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:23:19.910966 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:23:19.911050 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:23:19.915854 systemd[1]: Stopped target network.target - Network. Jun 20 18:23:19.923781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:23:19.924305 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:23:19.928290 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:23:19.930217 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:23:19.932119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:23:19.934711 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:23:19.936711 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:23:19.940512 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:23:19.940586 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:23:19.943069 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:23:19.943129 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:23:19.946800 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:23:19.946891 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jun 20 18:23:19.949330 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:23:19.949404 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:23:19.953695 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:23:19.959817 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:23:19.963758 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:23:19.965012 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:23:19.965212 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:23:19.973177 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:23:19.973995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:23:20.037420 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:23:20.037756 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:23:20.063028 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:23:20.065572 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:23:20.065870 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:23:20.084306 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:23:20.087154 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 18:23:20.089670 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:23:20.089768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:23:20.093811 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:23:20.095746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:23:20.095857 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:23:20.098959 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:23:20.099083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:23:20.103566 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:23:20.104033 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:23:20.108880 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:23:20.108985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:23:20.130754 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:23:20.157007 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:23:20.157135 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:23:20.174508 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:23:20.176837 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:23:20.181914 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:23:20.183080 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:23:20.188605 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:23:20.188754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:23:20.192891 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jun 20 18:23:20.192959 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:23:20.200983 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:23:20.201103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:23:20.209800 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:23:20.209935 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:23:20.213088 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:23:20.213219 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:23:20.235447 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:23:20.238253 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 18:23:20.238390 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:23:20.254651 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:23:20.254903 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:23:20.259479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:23:20.259790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:20.270406 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 18:23:20.270539 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:23:20.270630 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:23:20.293830 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:23:20.294053 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:23:20.298691 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:23:20.305953 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:23:20.355478 systemd[1]: Switching root. Jun 20 18:23:20.409935 systemd-journald[257]: Journal stopped Jun 20 18:23:22.920988 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). Jun 20 18:23:22.921107 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:23:22.921152 kernel: SELinux: policy capability open_perms=1 Jun 20 18:23:22.921182 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:23:22.921212 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:23:22.921240 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:23:22.921269 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:23:22.921297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:23:22.921323 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:23:22.921351 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 18:23:22.921379 kernel: audit: type=1403 audit(1750443800.878:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:23:22.921419 systemd[1]: Successfully loaded SELinux policy in 90.163ms. Jun 20 18:23:22.921467 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.728ms. 
Jun 20 18:23:22.921500 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:23:22.921530 systemd[1]: Detected virtualization amazon. Jun 20 18:23:22.921559 systemd[1]: Detected architecture arm64. Jun 20 18:23:22.921589 systemd[1]: Detected first boot. Jun 20 18:23:22.921620 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:23:22.921648 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:23:22.921713 zram_generator::config[1454]: No configuration found. Jun 20 18:23:22.921756 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:23:22.921789 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:23:22.921817 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:23:22.921846 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:23:22.921877 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:23:22.921908 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:23:22.921941 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:23:22.921972 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:23:22.922005 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:23:22.922037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:23:22.922065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:23:22.922095 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:23:22.922125 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:23:22.922153 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:23:22.922182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:23:22.922210 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:23:22.922240 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:23:22.922275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:23:22.922306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:23:22.922336 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:23:22.922367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:23:22.922402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:23:22.922432 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:23:22.922461 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:23:22.922496 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:23:22.922524 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jun 20 18:23:22.922551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:23:22.922581 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:23:22.922611 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:23:22.922641 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:23:22.922701 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:23:22.922732 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:23:22.922762 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:23:22.922795 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:23:22.922823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:23:22.922851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:23:22.922880 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:23:22.922908 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:23:22.922935 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:23:22.922964 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:23:22.922995 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:23:22.923024 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:23:22.923057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:23:22.923091 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:23:22.923119 systemd[1]: Reached target machines.target - Containers. Jun 20 18:23:22.923147 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:23:22.923174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:22.923225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:23:22.923258 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:23:22.923289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:23:22.923317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:23:22.923350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:23:22.923378 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:23:22.923406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:23:22.923434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:23:22.923464 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:23:22.923492 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:23:22.923520 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:23:22.926936 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 20 18:23:22.927006 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:22.927035 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:23:22.927066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:23:22.927096 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:23:22.927124 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:23:22.927156 kernel: loop: module loaded Jun 20 18:23:22.927186 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:23:22.927271 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:23:22.930780 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:23:22.930826 systemd[1]: Stopped verity-setup.service. Jun 20 18:23:22.930856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:23:22.930892 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:23:22.930925 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:23:22.930958 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:23:22.930987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:23:22.931015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:23:22.931045 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:23:22.931073 kernel: fuse: init (API version 7.41) Jun 20 18:23:22.931102 kernel: ACPI: bus type drm_connector registered Jun 20 18:23:22.931130 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:23:22.931169 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:23:22.931224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:23:22.931259 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:23:22.931289 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:23:22.931323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:23:22.931351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:23:22.931380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:23:22.931410 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:23:22.931444 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:23:22.931476 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:23:22.931512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:23:22.931551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:23:22.931581 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:23:22.931609 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:23:22.931649 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:23:22.934323 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jun 20 18:23:22.934356 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:23:22.934393 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:23:22.934425 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:23:22.934456 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:23:22.934487 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:23:22.934516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:22.934549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:23:22.934577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:23:22.934606 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:23:22.934634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:23:22.934740 systemd-journald[1537]: Collecting audit messages is disabled. Jun 20 18:23:22.934798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:23:22.934828 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:23:22.934860 systemd-journald[1537]: Journal started Jun 20 18:23:22.934908 systemd-journald[1537]: Runtime Journal (/run/log/journal/ec25d52bf8de425e57d9d94bf208c7af) is 8M, max 75.3M, 67.3M free. Jun 20 18:23:22.219776 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:23:22.245940 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 18:23:22.946402 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:23:22.246785 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:23:22.943912 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:23:22.947124 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:23:22.951983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:23:22.968129 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:23:22.987927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:23:23.016234 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:23:23.021773 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:23:23.028115 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:23:23.036981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:23:23.071744 kernel: loop0: detected capacity change from 0 to 138376 Jun 20 18:23:23.107571 systemd-journald[1537]: Time spent on flushing to /var/log/journal/ec25d52bf8de425e57d9d94bf208c7af is 60.430ms for 929 entries. Jun 20 18:23:23.107571 systemd-journald[1537]: System Journal (/var/log/journal/ec25d52bf8de425e57d9d94bf208c7af) is 8M, max 195.6M, 187.6M free. Jun 20 18:23:23.197370 systemd-journald[1537]: Received client request to flush runtime journal. 
Jun 20 18:23:23.111773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:23:23.121816 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:23:23.206028 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:23:23.205805 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:23:23.232572 kernel: loop1: detected capacity change from 0 to 211168 Jun 20 18:23:23.238731 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:23:23.245815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:23:23.251153 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:23:23.270194 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:23:23.305129 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jun 20 18:23:23.305690 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jun 20 18:23:23.317720 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:23:23.420704 kernel: loop2: detected capacity change from 0 to 107312 Jun 20 18:23:23.545299 kernel: loop3: detected capacity change from 0 to 61240 Jun 20 18:23:23.650703 kernel: loop4: detected capacity change from 0 to 138376 Jun 20 18:23:23.669698 kernel: loop5: detected capacity change from 0 to 211168 Jun 20 18:23:23.700693 kernel: loop6: detected capacity change from 0 to 107312 Jun 20 18:23:23.712692 kernel: loop7: detected capacity change from 0 to 61240 Jun 20 18:23:23.724900 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 20 18:23:23.725914 (sd-merge)[1611]: Merged extensions into '/usr'. Jun 20 18:23:23.736529 systemd[1]: Reload requested from client PID 1569 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:23:23.736574 systemd[1]: Reloading... Jun 20 18:23:23.950916 zram_generator::config[1646]: No configuration found. Jun 20 18:23:24.223306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:24.405782 systemd[1]: Reloading finished in 668 ms. Jun 20 18:23:24.433714 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:23:24.438232 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:23:24.454799 systemd[1]: Starting ensure-sysext.service... Jun 20 18:23:24.459997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:23:24.466211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:23:24.500990 systemd[1]: Reload requested from client PID 1689 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:23:24.501014 systemd[1]: Reloading... Jun 20 18:23:24.563884 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 18:23:24.563951 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 18:23:24.564532 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jun 20 18:23:24.565857 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:23:24.571250 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:23:24.575462 systemd-udevd[1691]: Using default interface naming scheme 'v255'. Jun 20 18:23:24.576344 systemd-tmpfiles[1690]: ACLs are not supported, ignoring. Jun 20 18:23:24.576497 systemd-tmpfiles[1690]: ACLs are not supported, ignoring. Jun 20 18:23:24.598736 systemd-tmpfiles[1690]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:23:24.598768 systemd-tmpfiles[1690]: Skipping /boot Jun 20 18:23:24.646459 systemd-tmpfiles[1690]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:23:24.646490 systemd-tmpfiles[1690]: Skipping /boot Jun 20 18:23:24.711031 zram_generator::config[1726]: No configuration found. Jun 20 18:23:24.776748 ldconfig[1565]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:23:24.991724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:25.113128 (udev-worker)[1780]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:23:25.243531 systemd[1]: Reloading finished in 741 ms. Jun 20 18:23:25.264358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:23:25.267842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:23:25.285337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:23:25.303125 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:23:25.309889 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:23:25.318007 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:23:25.326071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:23:25.334014 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:23:25.343106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:23:25.360803 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:23:25.398757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:25.407838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:23:25.419381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:23:25.431318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:23:25.434909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:25.435219 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:25.439773 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jun 20 18:23:25.465360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:23:25.471790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:23:25.475051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:23:25.475344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:23:25.475742 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:23:25.507504 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:23:25.521744 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:23:25.536507 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:23:25.545215 systemd[1]: Finished ensure-sysext.service. Jun 20 18:23:25.549956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:23:25.551040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:23:25.554230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:23:25.563347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:23:25.563865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:23:25.583144 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:23:25.585745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:23:25.589251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:23:25.594514 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:23:25.599347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:23:25.651485 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:23:25.669319 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:23:25.673204 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:23:25.713741 augenrules[1902]: No rules Jun 20 18:23:25.717320 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:23:25.719262 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:23:25.759964 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:23:26.029213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:23:26.074794 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:23:26.085602 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:23:26.164768 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 20 18:23:26.220542 systemd-networkd[1817]: lo: Link UP Jun 20 18:23:26.220561 systemd-networkd[1817]: lo: Gained carrier Jun 20 18:23:26.225425 systemd-networkd[1817]: Enumeration completed Jun 20 18:23:26.225635 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:23:26.228484 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:26.228505 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:23:26.232222 systemd-networkd[1817]: eth0: Link UP Jun 20 18:23:26.232506 systemd-networkd[1817]: eth0: Gained carrier Jun 20 18:23:26.232541 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:26.232940 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:23:26.238161 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:23:26.243489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:23:26.249834 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.21.135/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:23:26.258480 systemd-resolved[1818]: Positive Trust Anchors: Jun 20 18:23:26.258510 systemd-resolved[1818]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:23:26.258575 systemd-resolved[1818]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:23:26.269238 systemd-resolved[1818]: Defaulting to hostname 'linux'. Jun 20 18:23:26.276692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:23:26.279312 systemd[1]: Reached target network.target - Network. Jun 20 18:23:26.281455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:23:26.284088 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:23:26.286427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:23:26.289024 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:23:26.292050 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:23:26.294487 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:23:26.297288 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:23:26.299928 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:23:26.299989 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:23:26.301917 systemd[1]: Reached target timers.target - Timer Units. 
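eth0 above is picked up by the stock /usr/lib/systemd/network/zz-default.network shipped with Flatcar, which enables DHCP on interfaces that have no more specific configuration; the DHCPv4 lease (172.31.21.135/20 via 172.31.16.1) follows from that. The shipped file is not reproduced in this log, so the fragment below is only a minimal equivalent, with the [Match] section narrowed to eth0 as an assumption:

    [Match]
    # The shipped zz-default.network uses a much broader match; eth0 is assumed here.
    Name=eth0

    [Network]
    DHCP=yes
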
Jun 20 18:23:26.305842 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:23:26.310739 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:23:26.318292 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:23:26.321309 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:23:26.323985 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:23:26.336773 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:23:26.339757 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:23:26.343936 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:23:26.347148 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:23:26.350398 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:23:26.352624 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:23:26.355122 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:23:26.355302 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:23:26.358829 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:23:26.367212 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:23:26.373716 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:23:26.380159 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:23:26.389972 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:23:26.396110 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:23:26.398575 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:23:26.403281 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:23:26.410070 systemd[1]: Started ntpd.service - Network Time Service. Jun 20 18:23:26.418360 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:23:26.433377 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 20 18:23:26.446245 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:23:26.463249 jq[1978]: false Jun 20 18:23:26.467893 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:23:26.476596 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:23:26.480512 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:23:26.488277 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:23:26.491341 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:23:26.497850 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
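prepare-helm.service, started here with the description "Unpack helm to /opt/bin", is the unit Ignition wrote to /etc/systemd/system and preset to enabled during the files stage. Its body does not appear anywhere in this log; the following is only a plausible oneshot unit consistent with the helm tarball downloaded earlier and with the tar member names printed below (linux-arm64/LICENSE, linux-arm64/helm), with all paths and flags as assumptions:

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.3-linux-arm64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/mkdir -p /opt/bin
    # Extract the archive contents out of the linux-arm64/ prefix into /opt/bin;
    # -v accounts for the member names echoed by tar[1999] below.
    ExecStart=/usr/bin/tar -v -xzf /opt/helm-v3.17.3-linux-arm64.tar.gz -C /opt/bin --strip-components=1

    [Install]
    WantedBy=multi-user.target
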
Jun 20 18:23:26.505723 extend-filesystems[1979]: Found /dev/nvme0n1p6 Jun 20 18:23:26.512729 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:23:26.516037 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:23:26.516432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:23:26.546428 extend-filesystems[1979]: Found /dev/nvme0n1p9 Jun 20 18:23:26.562272 extend-filesystems[1979]: Checking size of /dev/nvme0n1p9 Jun 20 18:23:26.553403 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:23:26.553975 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:23:26.632974 update_engine[1990]: I20250620 18:23:26.630545 1990 main.cc:92] Flatcar Update Engine starting Jun 20 18:23:26.641689 jq[1992]: true Jun 20 18:23:26.652771 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:23:26.653757 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:23:26.663221 extend-filesystems[1979]: Resized partition /dev/nvme0n1p9 Jun 20 18:23:26.676471 extend-filesystems[2029]: resize2fs 1.47.2 (1-Jan-2025) Jun 20 18:23:26.685124 tar[1999]: linux-arm64/LICENSE Jun 20 18:23:26.685124 tar[1999]: linux-arm64/helm Jun 20 18:23:26.694140 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 20 18:23:26.699688 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:23:26.699356 dbus-daemon[1976]: [system] SELinux support is enabled Jun 20 18:23:26.710690 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:23:26.710753 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:23:26.713745 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:23:26.713801 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:23:26.720641 dbus-daemon[1976]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:24:50 UTC 2025 (1): Starting Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: ---------------------------------------------------- Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: corporation. 
Support and training for ntp-4 are Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: available at https://www.nwtime.org/support Jun 20 18:23:26.778051 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: ---------------------------------------------------- Jun 20 18:23:26.720765 (ntainerd)[2019]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:23:26.721937 ntpd[1981]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:24:50 UTC 2025 (1): Starting Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.771 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.776 INFO Fetch successful Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.784 INFO Fetch successful Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.797 INFO Fetch successful Jun 20 18:23:26.797994 coreos-metadata[1975]: Jun 20 18:23:26.797 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 20 18:23:26.806995 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 20 18:23:26.807088 update_engine[1990]: I20250620 18:23:26.745085 1990 update_check_scheduler.cc:74] Next update check in 8m11s Jun 20 18:23:26.807242 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: proto: precision = 0.096 usec (-23) Jun 20 18:23:26.781478 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jun 20 18:23:26.721982 ntpd[1981]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.805 INFO Fetch successful Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.806 INFO Fetch failed with 404: resource not found Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.812 INFO Fetch successful Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.814 INFO Fetch successful Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.815 INFO Fetch successful Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.818 INFO Fetch successful Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.818 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 20 18:23:26.841826 coreos-metadata[1975]: Jun 20 18:23:26.819 INFO Fetch successful Jun 20 18:23:26.793124 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:23:26.842718 extend-filesystems[2029]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 20 18:23:26.842718 extend-filesystems[2029]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 18:23:26.842718 extend-filesystems[2029]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jun 20 18:23:26.722000 ntpd[1981]: ---------------------------------------------------- Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: basedate set to 2025-06-08 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: gps base set to 2025-06-08 (week 2370) Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Listen normally on 3 eth0 172.31.21.135:123 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Listen normally on 4 lo [::1]:123 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: bind(21) AF_INET6 fe80::47b:43ff:fe35:82f9%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: unable to create socket on eth0 (5) for fe80::47b:43ff:fe35:82f9%2#123 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: failed to init interface for address fe80::47b:43ff:fe35:82f9%2 Jun 20 18:23:26.867039 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: Listening on routing socket on fd #21 for interface updates Jun 20 18:23:26.867549 jq[2024]: true Jun 20 18:23:26.870040 extend-filesystems[1979]: Resized filesystem in /dev/nvme0n1p9 Jun 20 18:23:26.722017 ntpd[1981]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:23:26.722034 ntpd[1981]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:23:26.722051 ntpd[1981]: corporation. Support and training for ntp-4 are Jun 20 18:23:26.722067 ntpd[1981]: available at https://www.nwtime.org/support Jun 20 18:23:26.722083 ntpd[1981]: ---------------------------------------------------- Jun 20 18:23:26.735415 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 18:23:26.796396 ntpd[1981]: proto: precision = 0.096 usec (-23) Jun 20 18:23:26.807811 ntpd[1981]: basedate set to 2025-06-08 Jun 20 18:23:26.807918 ntpd[1981]: gps base set to 2025-06-08 (week 2370) Jun 20 18:23:26.821430 ntpd[1981]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:23:26.821518 ntpd[1981]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:23:26.837411 ntpd[1981]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:23:26.837483 ntpd[1981]: Listen normally on 3 eth0 172.31.21.135:123 Jun 20 18:23:26.837549 ntpd[1981]: Listen normally on 4 lo [::1]:123 Jun 20 18:23:26.837625 ntpd[1981]: bind(21) AF_INET6 fe80::47b:43ff:fe35:82f9%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:26.837695 ntpd[1981]: unable to create socket on eth0 (5) for fe80::47b:43ff:fe35:82f9%2#123 Jun 20 18:23:26.837722 ntpd[1981]: failed to init interface for address fe80::47b:43ff:fe35:82f9%2 Jun 20 18:23:26.837778 ntpd[1981]: Listening on routing socket on fd #21 for interface updates Jun 20 18:23:26.889835 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:26.890721 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
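The extend-filesystems output above records the root filesystem on /dev/nvme0n1p9 being grown online from 553472 to 1489915 4-KiB blocks, which works out to roughly:

    553472  * 4096 B ≈ 2.1 GiB   (size as shipped in the image)
    1489915 * 4096 B ≈ 5.7 GiB   (after growing to fill the partition)
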
Jun 20 18:23:26.894479 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:26.894479 ntpd[1981]: 20 Jun 18:23:26 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:26.889892 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:23:26.907323 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:23:26.907761 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 18:23:26.926238 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 20 18:23:26.968540 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:23:26.972234 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:23:27.034175 systemd-logind[1989]: Watching system buttons on /dev/input/event0 (Power Button) Jun 20 18:23:27.034226 systemd-logind[1989]: Watching system buttons on /dev/input/event1 (Sleep Button) Jun 20 18:23:27.035135 systemd-logind[1989]: New seat seat0. Jun 20 18:23:27.037047 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:23:27.043868 bash[2102]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:23:27.047833 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:23:27.057960 systemd[1]: Starting sshkeys.service... Jun 20 18:23:27.147008 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 18:23:27.158693 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 18:23:27.361589 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:23:27.364380 locksmithd[2039]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:23:27.450047 coreos-metadata[2109]: Jun 20 18:23:27.449 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:23:27.452051 coreos-metadata[2109]: Jun 20 18:23:27.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 20 18:23:27.452892 coreos-metadata[2109]: Jun 20 18:23:27.452 INFO Fetch successful Jun 20 18:23:27.452892 coreos-metadata[2109]: Jun 20 18:23:27.452 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 20 18:23:27.453846 coreos-metadata[2109]: Jun 20 18:23:27.453 INFO Fetch successful Jun 20 18:23:27.458826 unknown[2109]: wrote ssh authorized keys file for user: core Jun 20 18:23:27.539702 update-ssh-keys[2150]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:23:27.542747 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 18:23:27.553573 systemd[1]: Finished sshkeys.service. Jun 20 18:23:27.626833 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 18:23:27.636830 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 18:23:27.638730 dbus-daemon[1976]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2036 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 18:23:27.649979 systemd[1]: Starting polkit.service - Authorization Manager... 
Jun 20 18:23:27.718243 containerd[2019]: time="2025-06-20T18:23:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 18:23:27.723343 ntpd[1981]: bind(24) AF_INET6 fe80::47b:43ff:fe35:82f9%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:27.724044 ntpd[1981]: 20 Jun 18:23:27 ntpd[1981]: bind(24) AF_INET6 fe80::47b:43ff:fe35:82f9%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:23:27.724044 ntpd[1981]: 20 Jun 18:23:27 ntpd[1981]: unable to create socket on eth0 (6) for fe80::47b:43ff:fe35:82f9%2#123 Jun 20 18:23:27.724044 ntpd[1981]: 20 Jun 18:23:27 ntpd[1981]: failed to init interface for address fe80::47b:43ff:fe35:82f9%2 Jun 20 18:23:27.723828 ntpd[1981]: unable to create socket on eth0 (6) for fe80::47b:43ff:fe35:82f9%2#123 Jun 20 18:23:27.723854 ntpd[1981]: failed to init interface for address fe80::47b:43ff:fe35:82f9%2 Jun 20 18:23:27.728996 containerd[2019]: time="2025-06-20T18:23:27.728920992Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 18:23:27.806693 containerd[2019]: time="2025-06-20T18:23:27.806612328Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.632µs" Jun 20 18:23:27.806693 containerd[2019]: time="2025-06-20T18:23:27.806686716Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 18:23:27.806866 containerd[2019]: time="2025-06-20T18:23:27.806725596Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807012648Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807055476Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807111168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807242352Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807268908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807600901Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.807632221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.809026297Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: 
time="2025-06-20T18:23:27.809753473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 18:23:27.810684 containerd[2019]: time="2025-06-20T18:23:27.810126229Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 18:23:27.812572 containerd[2019]: time="2025-06-20T18:23:27.812505145Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:23:27.812694 containerd[2019]: time="2025-06-20T18:23:27.812603629Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:23:27.812694 containerd[2019]: time="2025-06-20T18:23:27.812630761Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 18:23:27.819029 containerd[2019]: time="2025-06-20T18:23:27.815644441Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 18:23:27.819528 containerd[2019]: time="2025-06-20T18:23:27.819480757Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 18:23:27.819716 containerd[2019]: time="2025-06-20T18:23:27.819675685Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:23:27.833274 containerd[2019]: time="2025-06-20T18:23:27.833202277Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 18:23:27.833384 containerd[2019]: time="2025-06-20T18:23:27.833316697Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 18:23:27.833384 containerd[2019]: time="2025-06-20T18:23:27.833405509Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 18:23:27.833541 containerd[2019]: time="2025-06-20T18:23:27.833449369Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 18:23:27.833541 containerd[2019]: time="2025-06-20T18:23:27.833481049Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 18:23:27.833541 containerd[2019]: time="2025-06-20T18:23:27.833512957Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 18:23:27.833687 containerd[2019]: time="2025-06-20T18:23:27.833543413Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 18:23:27.833687 containerd[2019]: time="2025-06-20T18:23:27.833572321Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 18:23:27.833687 containerd[2019]: time="2025-06-20T18:23:27.833601061Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 18:23:27.833687 containerd[2019]: time="2025-06-20T18:23:27.833628181Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 18:23:27.833687 containerd[2019]: time="2025-06-20T18:23:27.833669593Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 18:23:27.833899 containerd[2019]: 
time="2025-06-20T18:23:27.833704717Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.833935501Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.833985997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834028969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834056173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834084913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834111877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834138085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834162805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834189625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834215353Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834241093Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834407209Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 18:23:27.835677 containerd[2019]: time="2025-06-20T18:23:27.834436417Z" level=info msg="Start snapshots syncer" Jun 20 18:23:27.836231 containerd[2019]: time="2025-06-20T18:23:27.835700581Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 18:23:27.837526 containerd[2019]: time="2025-06-20T18:23:27.837434497Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 18:23:27.837758 containerd[2019]: time="2025-06-20T18:23:27.837546661Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 18:23:27.840348 containerd[2019]: time="2025-06-20T18:23:27.840258589Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 18:23:27.840611 containerd[2019]: time="2025-06-20T18:23:27.840565525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 18:23:27.840689 containerd[2019]: time="2025-06-20T18:23:27.840622825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 18:23:27.840761 containerd[2019]: time="2025-06-20T18:23:27.840686125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 18:23:27.840761 containerd[2019]: time="2025-06-20T18:23:27.840723265Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 18:23:27.840854 containerd[2019]: time="2025-06-20T18:23:27.840754957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 18:23:27.840854 containerd[2019]: time="2025-06-20T18:23:27.840782149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 18:23:27.840854 containerd[2019]: time="2025-06-20T18:23:27.840816817Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 18:23:27.840981 containerd[2019]: time="2025-06-20T18:23:27.840871969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 18:23:27.840981 containerd[2019]: 
time="2025-06-20T18:23:27.840900625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 18:23:27.840981 containerd[2019]: time="2025-06-20T18:23:27.840930385Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 18:23:27.841626 containerd[2019]: time="2025-06-20T18:23:27.841573921Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:23:27.841767 containerd[2019]: time="2025-06-20T18:23:27.841724341Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:23:27.841822 containerd[2019]: time="2025-06-20T18:23:27.841760665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:23:27.841822 containerd[2019]: time="2025-06-20T18:23:27.841792237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:23:27.841822 containerd[2019]: time="2025-06-20T18:23:27.841813861Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 18:23:27.841970 containerd[2019]: time="2025-06-20T18:23:27.841839673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 18:23:27.841970 containerd[2019]: time="2025-06-20T18:23:27.841866253Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 18:23:27.841970 containerd[2019]: time="2025-06-20T18:23:27.841930837Z" level=info msg="runtime interface created" Jun 20 18:23:27.841970 containerd[2019]: time="2025-06-20T18:23:27.841946017Z" level=info msg="created NRI interface" Jun 20 18:23:27.842132 containerd[2019]: time="2025-06-20T18:23:27.841971289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 18:23:27.842132 containerd[2019]: time="2025-06-20T18:23:27.842001133Z" level=info msg="Connect containerd service" Jun 20 18:23:27.842132 containerd[2019]: time="2025-06-20T18:23:27.842074585Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:23:27.847343 containerd[2019]: time="2025-06-20T18:23:27.847223917Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:23:27.914719 polkitd[2172]: Started polkitd version 126 Jun 20 18:23:27.928884 polkitd[2172]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 18:23:27.929609 polkitd[2172]: Loading rules from directory /run/polkit-1/rules.d Jun 20 18:23:27.929801 polkitd[2172]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 18:23:27.930395 polkitd[2172]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jun 20 18:23:27.930545 polkitd[2172]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 18:23:27.930729 polkitd[2172]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 18:23:27.933034 polkitd[2172]: 
Finished loading, compiling and executing 2 rules Jun 20 18:23:27.933582 systemd[1]: Started polkit.service - Authorization Manager. Jun 20 18:23:27.941953 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 18:23:27.945567 polkitd[2172]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 18:23:27.984870 systemd-hostnamed[2036]: Hostname set to (transient) Jun 20 18:23:27.985247 systemd-resolved[1818]: System hostname changed to 'ip-172-31-21-135'. Jun 20 18:23:28.151822 systemd-networkd[1817]: eth0: Gained IPv6LL Jun 20 18:23:28.162742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:23:28.167683 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:23:28.176210 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 20 18:23:28.191187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:28.197981 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:23:28.271005 containerd[2019]: time="2025-06-20T18:23:28.270605363Z" level=info msg="Start subscribing containerd event" Jun 20 18:23:28.271005 containerd[2019]: time="2025-06-20T18:23:28.270921023Z" level=info msg="Start recovering state" Jun 20 18:23:28.271332 containerd[2019]: time="2025-06-20T18:23:28.271292651Z" level=info msg="Start event monitor" Jun 20 18:23:28.271939 containerd[2019]: time="2025-06-20T18:23:28.271587155Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:23:28.271939 containerd[2019]: time="2025-06-20T18:23:28.271616375Z" level=info msg="Start streaming server" Jun 20 18:23:28.271939 containerd[2019]: time="2025-06-20T18:23:28.271688039Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 18:23:28.271939 containerd[2019]: time="2025-06-20T18:23:28.271714667Z" level=info msg="runtime interface starting up..." Jun 20 18:23:28.271939 containerd[2019]: time="2025-06-20T18:23:28.271756607Z" level=info msg="starting plugins..." Jun 20 18:23:28.271939 containerd[2019]: time="2025-06-20T18:23:28.271791599Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 18:23:28.274910 containerd[2019]: time="2025-06-20T18:23:28.274782971Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:23:28.274910 containerd[2019]: time="2025-06-20T18:23:28.274911959Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:23:28.279144 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:23:28.282253 containerd[2019]: time="2025-06-20T18:23:28.281639855Z" level=info msg="containerd successfully booted in 0.562804s" Jun 20 18:23:28.296104 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:23:28.402620 amazon-ssm-agent[2195]: Initializing new seelog logger Jun 20 18:23:28.405308 amazon-ssm-agent[2195]: New Seelog Logger Creation Complete Jun 20 18:23:28.405308 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.405308 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.405308 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 processing appconfig overrides Jun 20 18:23:28.406429 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
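[editor's note] The cri plugin configuration dumped above points containerd at confDir /etc/cni/net.d and binDir /opt/cni/bin, and the "no network config found in /etc/cni/net.d" error simply means no CNI conflist is installed there yet; the cni conf syncer that just started clears it once a network add-on drops one in. Purely as an illustration of the shape of file it is waiting for (the bridge/host-local values below are placeholders, not the add-on this node will actually run), a sketch that writes a minimal conflist:

    import json
    import pathlib

    conf_dir = pathlib.Path("/etc/cni/net.d")      # confDir from the config above
    conflist = {
        "cniVersion": "1.0.0",
        "name": "demo-net",                         # placeholder network name
        "plugins": [
            {
                "type": "bridge",                   # binary expected in /opt/cni/bin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",       # placeholder pod subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    conf_dir.mkdir(parents=True, exist_ok=True)     # needs root on a real node
    (conf_dir / "10-demo.conflist").write_text(json.dumps(conflist, indent=2))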
Jun 20 18:23:28.406429 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.406556 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 processing appconfig overrides Jun 20 18:23:28.408702 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.408702 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.408702 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 processing appconfig overrides Jun 20 18:23:28.408702 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.4063 INFO Proxy environment variables: Jun 20 18:23:28.413376 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.413376 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:28.413376 amazon-ssm-agent[2195]: 2025/06/20 18:23:28 processing appconfig overrides Jun 20 18:23:28.508138 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.4063 INFO https_proxy: Jun 20 18:23:28.610247 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.4063 INFO http_proxy: Jun 20 18:23:28.709409 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.4063 INFO no_proxy: Jun 20 18:23:28.713039 tar[1999]: linux-arm64/README.md Jun 20 18:23:28.761726 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:23:28.808537 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.4065 INFO Checking if agent identity type OnPrem can be assumed Jun 20 18:23:28.909781 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.4066 INFO Checking if agent identity type EC2 can be assumed Jun 20 18:23:29.008732 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5562 INFO Agent will take identity from EC2 Jun 20 18:23:29.108205 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5612 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jun 20 18:23:29.207605 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5612 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jun 20 18:23:29.307178 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5612 INFO [amazon-ssm-agent] Starting Core Agent Jun 20 18:23:29.313617 sshd_keygen[2028]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:23:29.351515 amazon-ssm-agent[2195]: 2025/06/20 18:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:29.351515 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:23:29.351974 amazon-ssm-agent[2195]: 2025/06/20 18:23:29 processing appconfig overrides Jun 20 18:23:29.375811 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:23:29.382105 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:23:29.387770 systemd[1]: Started sshd@0-172.31.21.135:22-139.178.68.195:33242.service - OpenSSH per-connection server daemon (139.178.68.195:33242). Jun 20 18:23:29.394248 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5612 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jun 20 18:23:29.394449 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5612 INFO [Registrar] Starting registrar module Jun 20 18:23:29.394565 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5649 INFO [EC2Identity] Checking disk for registration info Jun 20 18:23:29.394727 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5649 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jun 20 18:23:29.394876 amazon-ssm-agent[2195]: 2025-06-20 18:23:28.5649 INFO [EC2Identity] Generating registration keypair Jun 20 18:23:29.394876 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.2991 INFO [EC2Identity] Checking write access before registering Jun 20 18:23:29.394876 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.2997 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jun 20 18:23:29.395467 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3511 INFO [EC2Identity] EC2 registration was successful. Jun 20 18:23:29.395467 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3512 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jun 20 18:23:29.395467 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3513 INFO [CredentialRefresher] credentialRefresher has started Jun 20 18:23:29.395467 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3513 INFO [CredentialRefresher] Starting credentials refresher loop Jun 20 18:23:29.395467 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3937 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 20 18:23:29.395467 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3941 INFO [CredentialRefresher] Credentials ready Jun 20 18:23:29.407003 amazon-ssm-agent[2195]: 2025-06-20 18:23:29.3957 INFO [CredentialRefresher] Next credential rotation will be in 29.999967529 minutes Jun 20 18:23:29.409433 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:23:29.411782 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:23:29.421083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:23:29.462291 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:23:29.469867 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:23:29.477242 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:23:29.481148 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:23:29.630445 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 33242 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:29.633321 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:29.646733 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:23:29.651057 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:23:29.673905 systemd-logind[1989]: New session 1 of user core. Jun 20 18:23:29.695766 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:23:29.707498 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:23:29.728093 (systemd)[2241]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:23:29.733015 systemd-logind[1989]: New session c1 of user core. Jun 20 18:23:30.025163 systemd[2241]: Queued start job for default target default.target. Jun 20 18:23:30.040012 systemd[2241]: Created slice app.slice - User Application Slice. 
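[editor's note] The "Accepted publickey for core ... RSA SHA256:skNC..." entries identify the client key by its OpenSSH fingerprint: the SHA-256 digest of the raw key blob, base64-encoded with the trailing '=' padding dropped. A small sketch that reproduces those fingerprints from the authorized_keys file the metadata agent wrote earlier (assumes plain "<type> <base64-blob> [comment]" lines with no options prefix):

    import base64
    import hashlib

    def openssh_fingerprint(authorized_keys_line: str) -> str:
        blob_b64 = authorized_keys_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as fh:
        for line in fh:
            if line.strip() and not line.startswith("#"):
                print(openssh_fingerprint(line))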
Jun 20 18:23:30.040243 systemd[2241]: Reached target paths.target - Paths. Jun 20 18:23:30.040440 systemd[2241]: Reached target timers.target - Timers. Jun 20 18:23:30.044142 systemd[2241]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:23:30.073427 systemd[2241]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:23:30.073718 systemd[2241]: Reached target sockets.target - Sockets. Jun 20 18:23:30.073953 systemd[2241]: Reached target basic.target - Basic System. Jun 20 18:23:30.074139 systemd[2241]: Reached target default.target - Main User Target. Jun 20 18:23:30.074298 systemd[2241]: Startup finished in 329ms. Jun 20 18:23:30.074620 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:23:30.084952 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:23:30.240079 systemd[1]: Started sshd@1-172.31.21.135:22-139.178.68.195:33258.service - OpenSSH per-connection server daemon (139.178.68.195:33258). Jun 20 18:23:30.425873 amazon-ssm-agent[2195]: 2025-06-20 18:23:30.4254 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 20 18:23:30.441692 sshd[2252]: Accepted publickey for core from 139.178.68.195 port 33258 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:30.447862 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:30.463095 systemd-logind[1989]: New session 2 of user core. Jun 20 18:23:30.468262 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:23:30.526769 amazon-ssm-agent[2195]: 2025-06-20 18:23:30.4316 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2256) started Jun 20 18:23:30.599603 sshd[2260]: Connection closed by 139.178.68.195 port 33258 Jun 20 18:23:30.600797 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:30.611130 systemd[1]: sshd@1-172.31.21.135:22-139.178.68.195:33258.service: Deactivated successfully. Jun 20 18:23:30.615864 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 18:23:30.624820 systemd-logind[1989]: Session 2 logged out. Waiting for processes to exit. Jun 20 18:23:30.629041 amazon-ssm-agent[2195]: 2025-06-20 18:23:30.4316 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 20 18:23:30.642813 systemd[1]: Started sshd@2-172.31.21.135:22-139.178.68.195:33262.service - OpenSSH per-connection server daemon (139.178.68.195:33262). Jun 20 18:23:30.648362 systemd-logind[1989]: Removed session 2. Jun 20 18:23:30.780898 ntpd[1981]: Listen normally on 7 eth0 [fe80::47b:43ff:fe35:82f9%2]:123 Jun 20 18:23:30.781830 ntpd[1981]: 20 Jun 18:23:30 ntpd[1981]: Listen normally on 7 eth0 [fe80::47b:43ff:fe35:82f9%2]:123 Jun 20 18:23:30.896102 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 33262 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:30.898504 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:30.906757 systemd-logind[1989]: New session 3 of user core. Jun 20 18:23:30.913932 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:23:31.013456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:31.019820 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jun 20 18:23:31.023234 systemd[1]: Startup finished in 3.845s (kernel) + 9.177s (initrd) + 10.230s (userspace) = 23.254s. Jun 20 18:23:31.030882 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:31.066052 sshd[2274]: Connection closed by 139.178.68.195 port 33262 Jun 20 18:23:31.066891 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:31.073368 systemd[1]: sshd@2-172.31.21.135:22-139.178.68.195:33262.service: Deactivated successfully. Jun 20 18:23:31.080032 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 18:23:31.082404 systemd-logind[1989]: Session 3 logged out. Waiting for processes to exit. Jun 20 18:23:31.087905 systemd-logind[1989]: Removed session 3. Jun 20 18:23:32.449675 kubelet[2280]: E0620 18:23:32.449571 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:32.454335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:32.455049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:32.455797 systemd[1]: kubelet.service: Consumed 1.350s CPU time, 259.6M memory peak. Jun 20 18:23:33.367540 systemd-resolved[1818]: Clock change detected. Flushing caches. Jun 20 18:23:40.759248 systemd[1]: Started sshd@3-172.31.21.135:22-139.178.68.195:37916.service - OpenSSH per-connection server daemon (139.178.68.195:37916). Jun 20 18:23:40.969984 sshd[2297]: Accepted publickey for core from 139.178.68.195 port 37916 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:40.972489 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:40.980213 systemd-logind[1989]: New session 4 of user core. Jun 20 18:23:40.990585 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:23:41.115926 sshd[2299]: Connection closed by 139.178.68.195 port 37916 Jun 20 18:23:41.116385 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:41.124079 systemd[1]: sshd@3-172.31.21.135:22-139.178.68.195:37916.service: Deactivated successfully. Jun 20 18:23:41.127103 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:23:41.130582 systemd-logind[1989]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:23:41.133607 systemd-logind[1989]: Removed session 4. Jun 20 18:23:41.154757 systemd[1]: Started sshd@4-172.31.21.135:22-139.178.68.195:37920.service - OpenSSH per-connection server daemon (139.178.68.195:37920). Jun 20 18:23:41.369182 sshd[2305]: Accepted publickey for core from 139.178.68.195 port 37920 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:41.371655 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:41.381299 systemd-logind[1989]: New session 5 of user core. Jun 20 18:23:41.388620 systemd[1]: Started session-5.scope - Session 5 of User core. 
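[editor's note] The kubelet exits (and systemd keeps restart-looping it below) because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is generated when the node is joined to a cluster, so these failures are expected at this point in boot. Purely to give the error a concrete shape, a sketch that prints a minimal KubeletConfiguration with placeholder values (not this cluster's real settings, and not a substitute for the file kubeadm writes):

    import textwrap

    # Placeholder KubeletConfiguration; "kubeadm join" writes the real
    # /var/lib/kubelet/config.yaml for this node.
    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd            # matches SystemdCgroup=true in the CRI config above
        clusterDomain: cluster.local
        clusterDNS:
          - 10.96.0.10                   # placeholder cluster DNS service IP
    """)

    print(KUBELET_CONFIG)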
Jun 20 18:23:41.510062 sshd[2307]: Connection closed by 139.178.68.195 port 37920 Jun 20 18:23:41.510613 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:41.516993 systemd[1]: sshd@4-172.31.21.135:22-139.178.68.195:37920.service: Deactivated successfully. Jun 20 18:23:41.520780 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:23:41.523222 systemd-logind[1989]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:23:41.525889 systemd-logind[1989]: Removed session 5. Jun 20 18:23:41.545299 systemd[1]: Started sshd@5-172.31.21.135:22-139.178.68.195:37926.service - OpenSSH per-connection server daemon (139.178.68.195:37926). Jun 20 18:23:41.736946 sshd[2313]: Accepted publickey for core from 139.178.68.195 port 37926 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:41.739844 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:41.748419 systemd-logind[1989]: New session 6 of user core. Jun 20 18:23:41.758597 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:23:41.883171 sshd[2315]: Connection closed by 139.178.68.195 port 37926 Jun 20 18:23:41.884002 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:41.890458 systemd[1]: sshd@5-172.31.21.135:22-139.178.68.195:37926.service: Deactivated successfully. Jun 20 18:23:41.894043 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:23:41.895919 systemd-logind[1989]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:23:41.899697 systemd-logind[1989]: Removed session 6. Jun 20 18:23:41.925778 systemd[1]: Started sshd@6-172.31.21.135:22-139.178.68.195:37938.service - OpenSSH per-connection server daemon (139.178.68.195:37938). Jun 20 18:23:42.107368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:23:42.110684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:42.141393 sshd[2321]: Accepted publickey for core from 139.178.68.195 port 37938 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:42.143368 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:42.151792 systemd-logind[1989]: New session 7 of user core. Jun 20 18:23:42.155298 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:23:42.278888 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:23:42.279600 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:42.297891 sudo[2327]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:42.322368 sshd[2326]: Connection closed by 139.178.68.195 port 37938 Jun 20 18:23:42.324812 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:42.333159 systemd[1]: sshd@6-172.31.21.135:22-139.178.68.195:37938.service: Deactivated successfully. Jun 20 18:23:42.334110 systemd-logind[1989]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:23:42.337450 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:23:42.357304 systemd-logind[1989]: Removed session 7. Jun 20 18:23:42.360758 systemd[1]: Started sshd@7-172.31.21.135:22-139.178.68.195:37948.service - OpenSSH per-connection server daemon (139.178.68.195:37948). Jun 20 18:23:42.496707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:23:42.511817 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:42.570609 sshd[2333]: Accepted publickey for core from 139.178.68.195 port 37948 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:42.573286 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:42.586468 systemd-logind[1989]: New session 8 of user core. Jun 20 18:23:42.591657 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:23:42.596078 kubelet[2340]: E0620 18:23:42.595433 2340 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:42.605720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:42.606042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:42.606754 systemd[1]: kubelet.service: Consumed 315ms CPU time, 105.9M memory peak. Jun 20 18:23:42.698515 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:23:42.699101 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:42.709608 sudo[2349]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:42.718969 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:23:42.720250 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:42.737995 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:23:42.798672 augenrules[2371]: No rules Jun 20 18:23:42.800836 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:23:42.802414 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:23:42.805374 sudo[2348]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:42.828558 sshd[2346]: Connection closed by 139.178.68.195 port 37948 Jun 20 18:23:42.829277 sshd-session[2333]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:42.835775 systemd[1]: sshd@7-172.31.21.135:22-139.178.68.195:37948.service: Deactivated successfully. Jun 20 18:23:42.838562 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:23:42.841930 systemd-logind[1989]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:23:42.844682 systemd-logind[1989]: Removed session 8. Jun 20 18:23:42.868246 systemd[1]: Started sshd@8-172.31.21.135:22-139.178.68.195:37952.service - OpenSSH per-connection server daemon (139.178.68.195:37952). Jun 20 18:23:43.068613 sshd[2380]: Accepted publickey for core from 139.178.68.195 port 37952 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:23:43.071000 sshd-session[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:43.080427 systemd-logind[1989]: New session 9 of user core. Jun 20 18:23:43.088625 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 20 18:23:43.194468 sudo[2383]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:23:43.195077 sudo[2383]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:43.689949 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:23:43.703864 (dockerd)[2400]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:23:44.049671 dockerd[2400]: time="2025-06-20T18:23:44.049201633Z" level=info msg="Starting up" Jun 20 18:23:44.053811 dockerd[2400]: time="2025-06-20T18:23:44.053737945Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 18:23:44.110373 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2747373427-merged.mount: Deactivated successfully. Jun 20 18:23:44.157370 dockerd[2400]: time="2025-06-20T18:23:44.156917341Z" level=info msg="Loading containers: start." Jun 20 18:23:44.171380 kernel: Initializing XFRM netlink socket Jun 20 18:23:44.472526 (udev-worker)[2421]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:23:44.546566 systemd-networkd[1817]: docker0: Link UP Jun 20 18:23:44.556886 dockerd[2400]: time="2025-06-20T18:23:44.556828407Z" level=info msg="Loading containers: done." Jun 20 18:23:44.584838 dockerd[2400]: time="2025-06-20T18:23:44.584770479Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:23:44.585046 dockerd[2400]: time="2025-06-20T18:23:44.584895627Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 18:23:44.585104 dockerd[2400]: time="2025-06-20T18:23:44.585082947Z" level=info msg="Initializing buildkit" Jun 20 18:23:44.632993 dockerd[2400]: time="2025-06-20T18:23:44.632926743Z" level=info msg="Completed buildkit initialization" Jun 20 18:23:44.647499 dockerd[2400]: time="2025-06-20T18:23:44.647246008Z" level=info msg="Daemon has completed initialization" Jun 20 18:23:44.647831 dockerd[2400]: time="2025-06-20T18:23:44.647769052Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:23:44.648094 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:23:45.533188 containerd[2019]: time="2025-06-20T18:23:45.533135932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 18:23:46.184256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82934383.mount: Deactivated successfully. 
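[editor's note] dockerd reports its API on /run/docker.sock, along with the daemon details it detected (version 28.0.1, overlay2 storage driver). One way to confirm those details from the host is to speak HTTP over that unix socket; a minimal standard-library sketch, run as root or a member of the docker group:

    import socket

    # Plain HTTP/1.0 GET /version over the unix socket dockerd just announced;
    # the response body is JSON with Version, ApiVersion and driver details.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.0\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)

    raw = b"".join(chunks).decode(errors="replace")
    headers, _, body = raw.partition("\r\n\r\n")
    print(body)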
Jun 20 18:23:47.502516 containerd[2019]: time="2025-06-20T18:23:47.501678942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:47.503971 containerd[2019]: time="2025-06-20T18:23:47.503474550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jun 20 18:23:47.506164 containerd[2019]: time="2025-06-20T18:23:47.506110086Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:47.511720 containerd[2019]: time="2025-06-20T18:23:47.511669470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:47.513485 containerd[2019]: time="2025-06-20T18:23:47.513441522Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.980240418s" Jun 20 18:23:47.513656 containerd[2019]: time="2025-06-20T18:23:47.513626466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jun 20 18:23:47.516723 containerd[2019]: time="2025-06-20T18:23:47.516568578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 18:23:48.878378 containerd[2019]: time="2025-06-20T18:23:48.877984077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:48.881232 containerd[2019]: time="2025-06-20T18:23:48.881181285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jun 20 18:23:48.883296 containerd[2019]: time="2025-06-20T18:23:48.883232013Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:48.897695 containerd[2019]: time="2025-06-20T18:23:48.897598065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:48.899003 containerd[2019]: time="2025-06-20T18:23:48.898782573Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.381835623s" Jun 20 18:23:48.899003 containerd[2019]: time="2025-06-20T18:23:48.898838997Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jun 20 18:23:48.899516 
containerd[2019]: time="2025-06-20T18:23:48.899467425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 18:23:50.154732 containerd[2019]: time="2025-06-20T18:23:50.154654303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:50.156673 containerd[2019]: time="2025-06-20T18:23:50.156604555Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jun 20 18:23:50.159500 containerd[2019]: time="2025-06-20T18:23:50.159431083Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:50.164753 containerd[2019]: time="2025-06-20T18:23:50.164655931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:50.166609 containerd[2019]: time="2025-06-20T18:23:50.166397011Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.26687399s" Jun 20 18:23:50.166609 containerd[2019]: time="2025-06-20T18:23:50.166449559Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jun 20 18:23:50.167392 containerd[2019]: time="2025-06-20T18:23:50.167246575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 18:23:51.443755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354157918.mount: Deactivated successfully. 
Jun 20 18:23:52.109252 containerd[2019]: time="2025-06-20T18:23:52.109171557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:52.111407 containerd[2019]: time="2025-06-20T18:23:52.111322377Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jun 20 18:23:52.114030 containerd[2019]: time="2025-06-20T18:23:52.113955537Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:52.118299 containerd[2019]: time="2025-06-20T18:23:52.118224609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:52.119298 containerd[2019]: time="2025-06-20T18:23:52.119243217Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.95194407s" Jun 20 18:23:52.119422 containerd[2019]: time="2025-06-20T18:23:52.119297865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jun 20 18:23:52.120135 containerd[2019]: time="2025-06-20T18:23:52.120059097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 18:23:52.667445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:23:52.672680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:52.707670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373565535.mount: Deactivated successfully. Jun 20 18:23:53.105591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:53.127532 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:53.237831 kubelet[2693]: E0620 18:23:53.237730 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:53.243155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:53.244673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:53.246645 systemd[1]: kubelet.service: Consumed 322ms CPU time, 107.4M memory peak. 
Jun 20 18:23:54.048221 containerd[2019]: time="2025-06-20T18:23:54.048137038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:54.053959 containerd[2019]: time="2025-06-20T18:23:54.053893654Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jun 20 18:23:54.061046 containerd[2019]: time="2025-06-20T18:23:54.060985390Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:54.074697 containerd[2019]: time="2025-06-20T18:23:54.074587198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:54.077275 containerd[2019]: time="2025-06-20T18:23:54.076812994Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.956697677s" Jun 20 18:23:54.077275 containerd[2019]: time="2025-06-20T18:23:54.076873786Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jun 20 18:23:54.078779 containerd[2019]: time="2025-06-20T18:23:54.077759938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:23:54.604520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761771131.mount: Deactivated successfully. 
Jun 20 18:23:54.615361 containerd[2019]: time="2025-06-20T18:23:54.614330569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:54.617897 containerd[2019]: time="2025-06-20T18:23:54.617859061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jun 20 18:23:54.620544 containerd[2019]: time="2025-06-20T18:23:54.620487193Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:54.624668 containerd[2019]: time="2025-06-20T18:23:54.624608005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:54.626062 containerd[2019]: time="2025-06-20T18:23:54.626022001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 548.204619ms" Jun 20 18:23:54.626209 containerd[2019]: time="2025-06-20T18:23:54.626181997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 18:23:54.627121 containerd[2019]: time="2025-06-20T18:23:54.627054433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 18:23:55.161509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984023618.mount: Deactivated successfully. 
Jun 20 18:23:57.078810 containerd[2019]: time="2025-06-20T18:23:57.078751693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:57.081555 containerd[2019]: time="2025-06-20T18:23:57.081512269Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jun 20 18:23:57.084207 containerd[2019]: time="2025-06-20T18:23:57.084136057Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:57.089702 containerd[2019]: time="2025-06-20T18:23:57.089622793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:57.091798 containerd[2019]: time="2025-06-20T18:23:57.091582525Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.46430866s" Jun 20 18:23:57.091798 containerd[2019]: time="2025-06-20T18:23:57.091641109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jun 20 18:23:57.666062 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 20 18:24:03.245361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 18:24:03.246689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:03.247162 systemd[1]: kubelet.service: Consumed 322ms CPU time, 107.4M memory peak. Jun 20 18:24:03.251193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:24:03.299395 systemd[1]: Reload requested from client PID 2831 ('systemctl') (unit session-9.scope)... Jun 20 18:24:03.299420 systemd[1]: Reloading... Jun 20 18:24:03.539476 zram_generator::config[2878]: No configuration found. Jun 20 18:24:03.740605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:24:03.995641 systemd[1]: Reloading finished in 695 ms. Jun 20 18:24:04.122322 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 18:24:04.122769 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 18:24:04.123402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:04.123479 systemd[1]: kubelet.service: Consumed 220ms CPU time, 95M memory peak. Jun 20 18:24:04.126835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:24:04.461022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:04.477156 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:24:04.547718 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:24:04.547718 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:24:04.547718 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:24:04.548257 kubelet[2938]: I0620 18:24:04.547806 2938 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:24:06.740823 kubelet[2938]: I0620 18:24:06.740757 2938 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:24:06.740823 kubelet[2938]: I0620 18:24:06.740805 2938 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:24:06.741554 kubelet[2938]: I0620 18:24:06.741169 2938 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:24:06.781457 kubelet[2938]: E0620 18:24:06.781384 2938 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 18:24:06.784984 kubelet[2938]: I0620 18:24:06.784727 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:24:06.797076 kubelet[2938]: I0620 18:24:06.797034 2938 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:24:06.807551 kubelet[2938]: I0620 18:24:06.807504 2938 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:24:06.808005 kubelet[2938]: I0620 18:24:06.807955 2938 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:24:06.808324 kubelet[2938]: I0620 18:24:06.808006 2938 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-135","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:24:06.808518 kubelet[2938]: I0620 18:24:06.808418 2938 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:24:06.808518 kubelet[2938]: I0620 18:24:06.808441 2938 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:24:06.810202 kubelet[2938]: I0620 18:24:06.810154 2938 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:06.816283 kubelet[2938]: I0620 18:24:06.816223 2938 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:24:06.816283 kubelet[2938]: I0620 18:24:06.816266 2938 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:24:06.820059 kubelet[2938]: I0620 18:24:06.820005 2938 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:24:06.822352 kubelet[2938]: I0620 18:24:06.822251 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:24:06.825630 kubelet[2938]: E0620 18:24:06.825580 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-135&limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:24:06.825976 kubelet[2938]: I0620 18:24:06.825948 2938 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:24:06.827044 kubelet[2938]: I0620 18:24:06.827012 2938 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Jun 20 18:24:06.827269 kubelet[2938]: W0620 18:24:06.827248 2938 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:24:06.834442 kubelet[2938]: I0620 18:24:06.834063 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:24:06.834442 kubelet[2938]: I0620 18:24:06.834128 2938 server.go:1289] "Started kubelet" Jun 20 18:24:06.837567 kubelet[2938]: E0620 18:24:06.837496 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:24:06.837707 kubelet[2938]: I0620 18:24:06.837612 2938 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:24:06.843388 kubelet[2938]: I0620 18:24:06.841031 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:24:06.843388 kubelet[2938]: I0620 18:24:06.841333 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:24:06.843388 kubelet[2938]: I0620 18:24:06.841477 2938 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:24:06.849224 kubelet[2938]: E0620 18:24:06.846316 2938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.135:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-135.184ad36789d06aa2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-135,UID:ip-172-31-21-135,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-135,},FirstTimestamp:2025-06-20 18:24:06.83408861 +0000 UTC m=+2.349833581,LastTimestamp:2025-06-20 18:24:06.83408861 +0000 UTC m=+2.349833581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-135,}" Jun 20 18:24:06.851401 kubelet[2938]: I0620 18:24:06.851333 2938 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:24:06.853266 kubelet[2938]: I0620 18:24:06.853222 2938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:24:06.853852 kubelet[2938]: I0620 18:24:06.853817 2938 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:24:06.854220 kubelet[2938]: E0620 18:24:06.854174 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-135\" not found" Jun 20 18:24:06.854725 kubelet[2938]: I0620 18:24:06.854689 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:24:06.854825 kubelet[2938]: I0620 18:24:06.854787 2938 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:24:06.858766 kubelet[2938]: E0620 18:24:06.858680 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:24:06.858924 kubelet[2938]: E0620 18:24:06.858820 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-135?timeout=10s\": dial tcp 172.31.21.135:6443: connect: connection refused" interval="200ms" Jun 20 18:24:06.859359 kubelet[2938]: I0620 18:24:06.859255 2938 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:24:06.859503 kubelet[2938]: I0620 18:24:06.859463 2938 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:24:06.862215 kubelet[2938]: E0620 18:24:06.862178 2938 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:24:06.864761 kubelet[2938]: I0620 18:24:06.864728 2938 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:24:06.899476 kubelet[2938]: I0620 18:24:06.899330 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:24:06.899476 kubelet[2938]: I0620 18:24:06.899423 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:24:06.899476 kubelet[2938]: I0620 18:24:06.899456 2938 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:06.900877 kubelet[2938]: I0620 18:24:06.900624 2938 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:24:06.902817 kubelet[2938]: I0620 18:24:06.902779 2938 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:24:06.904055 kubelet[2938]: I0620 18:24:06.902919 2938 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:24:06.904055 kubelet[2938]: I0620 18:24:06.902956 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:24:06.904055 kubelet[2938]: I0620 18:24:06.902971 2938 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:24:06.904055 kubelet[2938]: E0620 18:24:06.903035 2938 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:24:06.906026 kubelet[2938]: I0620 18:24:06.905624 2938 policy_none.go:49] "None policy: Start" Jun 20 18:24:06.906026 kubelet[2938]: I0620 18:24:06.905661 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:24:06.906026 kubelet[2938]: I0620 18:24:06.905683 2938 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:24:06.912566 kubelet[2938]: E0620 18:24:06.912522 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:24:06.921839 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:24:06.938286 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 18:24:06.945861 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:24:06.955318 kubelet[2938]: E0620 18:24:06.955267 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-135\" not found" Jun 20 18:24:06.958479 kubelet[2938]: E0620 18:24:06.958048 2938 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:24:06.960445 kubelet[2938]: I0620 18:24:06.959273 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:24:06.960575 kubelet[2938]: I0620 18:24:06.960451 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:24:06.962048 kubelet[2938]: I0620 18:24:06.961223 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:24:06.964182 kubelet[2938]: E0620 18:24:06.964146 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:24:06.964751 kubelet[2938]: E0620 18:24:06.964710 2938 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-135\" not found" Jun 20 18:24:07.032786 systemd[1]: Created slice kubepods-burstable-pod9252acccb8917b13bc972b1bdf02bbe1.slice - libcontainer container kubepods-burstable-pod9252acccb8917b13bc972b1bdf02bbe1.slice. Jun 20 18:24:07.050981 kubelet[2938]: E0620 18:24:07.050926 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:07.058716 systemd[1]: Created slice kubepods-burstable-pod157e996923d2fb96d2fe2eedbd33ccc0.slice - libcontainer container kubepods-burstable-pod157e996923d2fb96d2fe2eedbd33ccc0.slice. Jun 20 18:24:07.060626 kubelet[2938]: E0620 18:24:07.060480 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-135?timeout=10s\": dial tcp 172.31.21.135:6443: connect: connection refused" interval="400ms" Jun 20 18:24:07.065080 kubelet[2938]: I0620 18:24:07.065018 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-135" Jun 20 18:24:07.066386 kubelet[2938]: E0620 18:24:07.066248 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:07.067677 kubelet[2938]: E0620 18:24:07.067621 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.135:6443/api/v1/nodes\": dial tcp 172.31.21.135:6443: connect: connection refused" node="ip-172-31-21-135" Jun 20 18:24:07.070502 systemd[1]: Created slice kubepods-burstable-pod2711326a59d488813bb7a8b352595ae5.slice - libcontainer container kubepods-burstable-pod2711326a59d488813bb7a8b352595ae5.slice. 
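The "Failed to ensure lease exists, will retry" entries come from the kubelet's lease controller, which maintains a coordination.k8s.io Lease named after the node in the kube-node-lease namespace as its heartbeat. A minimal sketch of inspecting that lease once the API server answers; the kubeconfig path is assumed:

```go
// Sketch of what the lease controller in the log is maintaining: a Lease named
// after the node in kube-node-lease, whose RenewTime advances with each heartbeat.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // path assumed
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "ip-172-31-21-135", metav1.GetOptions{})
	if err != nil {
		// Connection refused or NotFound while the control plane is still starting,
		// matching the 200ms/400ms retry intervals in the log.
		log.Fatal(err)
	}
	fmt.Println("lease:", lease.Name)
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity)
	}
	if lease.Spec.RenewTime != nil {
		fmt.Println("last renew:", lease.Spec.RenewTime.Time)
	}
}
```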
Jun 20 18:24:07.074940 kubelet[2938]: E0620 18:24:07.074898 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:07.157356 kubelet[2938]: I0620 18:24:07.157253 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9252acccb8917b13bc972b1bdf02bbe1-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-135\" (UID: \"9252acccb8917b13bc972b1bdf02bbe1\") " pod="kube-system/kube-scheduler-ip-172-31-21-135" Jun 20 18:24:07.157758 kubelet[2938]: I0620 18:24:07.157603 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/157e996923d2fb96d2fe2eedbd33ccc0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-135\" (UID: \"157e996923d2fb96d2fe2eedbd33ccc0\") " pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:07.157758 kubelet[2938]: I0620 18:24:07.157703 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:07.158065 kubelet[2938]: I0620 18:24:07.157904 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:07.158065 kubelet[2938]: I0620 18:24:07.157986 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/157e996923d2fb96d2fe2eedbd33ccc0-ca-certs\") pod \"kube-apiserver-ip-172-31-21-135\" (UID: \"157e996923d2fb96d2fe2eedbd33ccc0\") " pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:07.158269 kubelet[2938]: I0620 18:24:07.158025 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/157e996923d2fb96d2fe2eedbd33ccc0-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-135\" (UID: \"157e996923d2fb96d2fe2eedbd33ccc0\") " pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:07.158457 kubelet[2938]: I0620 18:24:07.158234 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:07.158457 kubelet[2938]: I0620 18:24:07.158422 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:07.158692 kubelet[2938]: I0620 
18:24:07.158621 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:07.271173 kubelet[2938]: I0620 18:24:07.271100 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-135" Jun 20 18:24:07.271830 kubelet[2938]: E0620 18:24:07.271750 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.135:6443/api/v1/nodes\": dial tcp 172.31.21.135:6443: connect: connection refused" node="ip-172-31-21-135" Jun 20 18:24:07.352579 containerd[2019]: time="2025-06-20T18:24:07.352429248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-135,Uid:9252acccb8917b13bc972b1bdf02bbe1,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:07.369450 containerd[2019]: time="2025-06-20T18:24:07.369042768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-135,Uid:157e996923d2fb96d2fe2eedbd33ccc0,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:07.377630 containerd[2019]: time="2025-06-20T18:24:07.377582748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-135,Uid:2711326a59d488813bb7a8b352595ae5,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:07.425365 containerd[2019]: time="2025-06-20T18:24:07.425077657Z" level=info msg="connecting to shim 85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9" address="unix:///run/containerd/s/cb7d31111904d8f2c972b93e94caf2cfb76e582d976bd4336457239fc3ae88e7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:07.464546 kubelet[2938]: E0620 18:24:07.463001 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-135?timeout=10s\": dial tcp 172.31.21.135:6443: connect: connection refused" interval="800ms" Jun 20 18:24:07.481121 containerd[2019]: time="2025-06-20T18:24:07.481063429Z" level=info msg="connecting to shim 8dd163e1eedc7a3dc2237072936a39831348e6d3165588f4e109a85399eaa6fd" address="unix:///run/containerd/s/db1b760cb74eb7bf36fc510da7fcae9607d003979395a0331ee6e3bed8605f39" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:07.482691 containerd[2019]: time="2025-06-20T18:24:07.482637313Z" level=info msg="connecting to shim af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b" address="unix:///run/containerd/s/3da8125aa6f0057d6a80e60d049035b3c3a00a573c46098f9c19976dd9d409fe" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:07.510730 systemd[1]: Started cri-containerd-85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9.scope - libcontainer container 85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9. Jun 20 18:24:07.582596 systemd[1]: Started cri-containerd-8dd163e1eedc7a3dc2237072936a39831348e6d3165588f4e109a85399eaa6fd.scope - libcontainer container 8dd163e1eedc7a3dc2237072936a39831348e6d3165588f4e109a85399eaa6fd. Jun 20 18:24:07.586053 systemd[1]: Started cri-containerd-af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b.scope - libcontainer container af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b. 
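The VerifyControllerAttachedVolume entries above enumerate the hostPath volumes (kubeconfig, ca-certs, k8s-certs, flexvolume-dir, usr-share-ca-certificates) of the three control-plane static pods, which the kubelet then turns into RunPodSandbox calls. A small sketch, under the assumption that the manifests live at the "/etc/kubernetes/manifests" static pod path logged earlier, that parses those manifests and prints the same hostPath volumes:

```go
// Sketch: enumerate the static pod manifests under the kubelet's static pod
// path and print the hostPath volumes referenced by the reconciler entries above.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	dir := "/etc/kubernetes/manifests" // matches "Adding static pod path" in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			log.Fatal(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			log.Printf("skipping %s: %v", e.Name(), err)
			continue
		}
		fmt.Println(pod.Name)
		for _, v := range pod.Spec.Volumes {
			if v.HostPath != nil {
				// e.g. the kubeconfig and ca-certs host paths listed in the log above.
				fmt.Printf("  %-28s %s\n", v.Name, v.HostPath.Path)
			}
		}
	}
}
```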
Jun 20 18:24:07.680006 kubelet[2938]: I0620 18:24:07.679878 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-135" Jun 20 18:24:07.681234 kubelet[2938]: E0620 18:24:07.681185 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.135:6443/api/v1/nodes\": dial tcp 172.31.21.135:6443: connect: connection refused" node="ip-172-31-21-135" Jun 20 18:24:07.707104 containerd[2019]: time="2025-06-20T18:24:07.706928978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-135,Uid:157e996923d2fb96d2fe2eedbd33ccc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dd163e1eedc7a3dc2237072936a39831348e6d3165588f4e109a85399eaa6fd\"" Jun 20 18:24:07.718844 containerd[2019]: time="2025-06-20T18:24:07.718796018Z" level=info msg="CreateContainer within sandbox \"8dd163e1eedc7a3dc2237072936a39831348e6d3165588f4e109a85399eaa6fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:24:07.724664 containerd[2019]: time="2025-06-20T18:24:07.724608098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-135,Uid:9252acccb8917b13bc972b1bdf02bbe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9\"" Jun 20 18:24:07.733401 containerd[2019]: time="2025-06-20T18:24:07.733311842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-135,Uid:2711326a59d488813bb7a8b352595ae5,Namespace:kube-system,Attempt:0,} returns sandbox id \"af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b\"" Jun 20 18:24:07.740635 containerd[2019]: time="2025-06-20T18:24:07.740418038Z" level=info msg="CreateContainer within sandbox \"85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:24:07.745381 containerd[2019]: time="2025-06-20T18:24:07.744880346Z" level=info msg="Container 516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:07.748002 kubelet[2938]: E0620 18:24:07.747926 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:24:07.749083 containerd[2019]: time="2025-06-20T18:24:07.749039882Z" level=info msg="CreateContainer within sandbox \"af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:24:07.767542 containerd[2019]: time="2025-06-20T18:24:07.767328026Z" level=info msg="CreateContainer within sandbox \"8dd163e1eedc7a3dc2237072936a39831348e6d3165588f4e109a85399eaa6fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc\"" Jun 20 18:24:07.768492 containerd[2019]: time="2025-06-20T18:24:07.768415586Z" level=info msg="StartContainer for \"516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc\"" Jun 20 18:24:07.771436 containerd[2019]: time="2025-06-20T18:24:07.771385742Z" level=info msg="connecting to shim 516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc" 
address="unix:///run/containerd/s/db1b760cb74eb7bf36fc510da7fcae9607d003979395a0331ee6e3bed8605f39" protocol=ttrpc version=3 Jun 20 18:24:07.773748 containerd[2019]: time="2025-06-20T18:24:07.773626634Z" level=info msg="Container 9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:07.784083 containerd[2019]: time="2025-06-20T18:24:07.783956918Z" level=info msg="Container 5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:07.795729 containerd[2019]: time="2025-06-20T18:24:07.795562047Z" level=info msg="CreateContainer within sandbox \"85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\"" Jun 20 18:24:07.796673 containerd[2019]: time="2025-06-20T18:24:07.796634115Z" level=info msg="StartContainer for \"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\"" Jun 20 18:24:07.799726 containerd[2019]: time="2025-06-20T18:24:07.799524771Z" level=info msg="connecting to shim 9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67" address="unix:///run/containerd/s/cb7d31111904d8f2c972b93e94caf2cfb76e582d976bd4336457239fc3ae88e7" protocol=ttrpc version=3 Jun 20 18:24:07.807616 containerd[2019]: time="2025-06-20T18:24:07.807205599Z" level=info msg="CreateContainer within sandbox \"af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\"" Jun 20 18:24:07.808857 containerd[2019]: time="2025-06-20T18:24:07.808810359Z" level=info msg="StartContainer for \"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\"" Jun 20 18:24:07.819998 containerd[2019]: time="2025-06-20T18:24:07.819887247Z" level=info msg="connecting to shim 5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff" address="unix:///run/containerd/s/3da8125aa6f0057d6a80e60d049035b3c3a00a573c46098f9c19976dd9d409fe" protocol=ttrpc version=3 Jun 20 18:24:07.820894 systemd[1]: Started cri-containerd-516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc.scope - libcontainer container 516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc. 
Jun 20 18:24:07.823960 kubelet[2938]: E0620 18:24:07.823488 2938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.135:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-135.184ad36789d06aa2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-135,UID:ip-172-31-21-135,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-135,},FirstTimestamp:2025-06-20 18:24:06.83408861 +0000 UTC m=+2.349833581,LastTimestamp:2025-06-20 18:24:06.83408861 +0000 UTC m=+2.349833581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-135,}" Jun 20 18:24:07.858113 systemd[1]: Started cri-containerd-9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67.scope - libcontainer container 9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67. Jun 20 18:24:07.879607 systemd[1]: Started cri-containerd-5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff.scope - libcontainer container 5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff. Jun 20 18:24:08.013988 kubelet[2938]: E0620 18:24:08.013844 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:24:08.034563 containerd[2019]: time="2025-06-20T18:24:08.034500396Z" level=info msg="StartContainer for \"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\" returns successfully" Jun 20 18:24:08.043809 containerd[2019]: time="2025-06-20T18:24:08.043752480Z" level=info msg="StartContainer for \"516559aeadc8a619efde7a68b3cee9bb636bf294bbbc460536a5bdf37112b2dc\" returns successfully" Jun 20 18:24:08.062599 kubelet[2938]: E0620 18:24:08.062530 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:24:08.070689 containerd[2019]: time="2025-06-20T18:24:08.070629396Z" level=info msg="StartContainer for \"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\" returns successfully" Jun 20 18:24:08.485413 kubelet[2938]: I0620 18:24:08.484863 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-135" Jun 20 18:24:08.960967 kubelet[2938]: E0620 18:24:08.960910 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:08.964435 kubelet[2938]: E0620 18:24:08.964123 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:08.971600 kubelet[2938]: E0620 18:24:08.971544 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:09.974195 kubelet[2938]: E0620 18:24:09.973978 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:09.976664 kubelet[2938]: E0620 18:24:09.975213 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:09.977290 kubelet[2938]: E0620 18:24:09.977015 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:10.977278 kubelet[2938]: E0620 18:24:10.977003 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:10.979170 kubelet[2938]: E0620 18:24:10.978066 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:11.418487 update_engine[1990]: I20250620 18:24:11.416477 1990 update_attempter.cc:509] Updating boot flags... Jun 20 18:24:12.029366 kubelet[2938]: E0620 18:24:12.029081 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:12.653229 kubelet[2938]: E0620 18:24:12.653183 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:13.205570 kubelet[2938]: E0620 18:24:13.205515 2938 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-135\" not found" node="ip-172-31-21-135" Jun 20 18:24:13.275135 kubelet[2938]: I0620 18:24:13.275066 2938 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-135" Jun 20 18:24:13.355222 kubelet[2938]: I0620 18:24:13.355153 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-135" Jun 20 18:24:13.449282 kubelet[2938]: E0620 18:24:13.449210 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-135\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-135" Jun 20 18:24:13.449282 kubelet[2938]: I0620 18:24:13.449274 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:13.453221 kubelet[2938]: E0620 18:24:13.453150 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-135\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:13.453221 kubelet[2938]: I0620 18:24:13.453215 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:13.464080 kubelet[2938]: E0620 18:24:13.463920 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-135\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:13.841714 
kubelet[2938]: I0620 18:24:13.841659 2938 apiserver.go:52] "Watching apiserver" Jun 20 18:24:13.855301 kubelet[2938]: I0620 18:24:13.855254 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:24:15.467833 systemd[1]: Reload requested from client PID 3402 ('systemctl') (unit session-9.scope)... Jun 20 18:24:15.467862 systemd[1]: Reloading... Jun 20 18:24:15.697425 zram_generator::config[3458]: No configuration found. Jun 20 18:24:15.929106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:24:15.950588 kubelet[2938]: I0620 18:24:15.950544 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:16.253957 systemd[1]: Reloading finished in 785 ms. Jun 20 18:24:16.302385 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:24:16.320421 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:24:16.320917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:16.321018 systemd[1]: kubelet.service: Consumed 3.100s CPU time, 131.8M memory peak. Jun 20 18:24:16.325120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:24:16.674933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:24:16.690213 (kubelet)[3506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:24:16.821399 kubelet[3506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:24:16.821399 kubelet[3506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:24:16.821399 kubelet[3506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
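The restarted kubelet (PID 3506) repeats the same deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir are meant to move into the file passed to --config. A minimal, illustrative sketch of emitting such a KubeletConfiguration from Go, carrying only settings this log actually shows (systemd cgroup driver, static pod path, containerd CRI endpoint); the values are examples, not read from the node, and the kubelet would then be started with --config pointing at the generated file.

```go
// Sketch: a minimal KubeletConfiguration corresponding to the deprecated flags
// mentioned above. Values are illustrative, taken from settings visible in the log.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		CgroupDriver:             "systemd",                                // "Using cgroup driver ... systemd"
		StaticPodPath:            "/etc/kubernetes/manifests",              // "Adding static pod path"
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // replaces the deprecated flag
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```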
Jun 20 18:24:16.821399 kubelet[3506]: I0620 18:24:16.820565 3506 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:24:16.832921 kubelet[3506]: I0620 18:24:16.832874 3506 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:24:16.833121 kubelet[3506]: I0620 18:24:16.833099 3506 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:24:16.834046 kubelet[3506]: I0620 18:24:16.834010 3506 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:24:16.836670 kubelet[3506]: I0620 18:24:16.836634 3506 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 18:24:16.849774 kubelet[3506]: I0620 18:24:16.849727 3506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:24:16.862764 kubelet[3506]: I0620 18:24:16.862718 3506 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:24:16.868370 kubelet[3506]: I0620 18:24:16.868299 3506 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 18:24:16.869274 kubelet[3506]: I0620 18:24:16.868988 3506 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:24:16.869496 kubelet[3506]: I0620 18:24:16.869034 3506 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-135","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:24:16.869852 kubelet[3506]: I0620 18:24:16.869825 3506 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:24:16.869954 kubelet[3506]: I0620 18:24:16.869938 3506 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:24:16.870109 kubelet[3506]: I0620 18:24:16.870091 3506 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:16.870491 kubelet[3506]: I0620 
18:24:16.870460 3506 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:24:16.870618 kubelet[3506]: I0620 18:24:16.870598 3506 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:24:16.871466 kubelet[3506]: I0620 18:24:16.871439 3506 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:24:16.871650 kubelet[3506]: I0620 18:24:16.871629 3506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:24:16.885363 kubelet[3506]: I0620 18:24:16.885280 3506 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:24:16.888554 kubelet[3506]: I0620 18:24:16.888508 3506 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:24:16.897582 kubelet[3506]: I0620 18:24:16.897456 3506 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:24:16.898026 kubelet[3506]: I0620 18:24:16.897993 3506 server.go:1289] "Started kubelet" Jun 20 18:24:16.902766 kubelet[3506]: I0620 18:24:16.902521 3506 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:24:16.904717 kubelet[3506]: I0620 18:24:16.904352 3506 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:24:16.907794 kubelet[3506]: I0620 18:24:16.907643 3506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:24:16.908926 kubelet[3506]: I0620 18:24:16.908878 3506 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:24:16.930679 kubelet[3506]: I0620 18:24:16.927169 3506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:24:16.943686 kubelet[3506]: I0620 18:24:16.943629 3506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:24:16.950604 kubelet[3506]: I0620 18:24:16.950554 3506 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:24:16.951619 kubelet[3506]: E0620 18:24:16.951579 3506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-135\" not found" Jun 20 18:24:16.955122 kubelet[3506]: I0620 18:24:16.955036 3506 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:24:16.978929 kubelet[3506]: I0620 18:24:16.978605 3506 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:24:16.984208 kubelet[3506]: I0620 18:24:16.984149 3506 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:24:16.984797 kubelet[3506]: I0620 18:24:16.984450 3506 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:24:16.987949 kubelet[3506]: E0620 18:24:16.987598 3506 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:24:16.992445 kubelet[3506]: I0620 18:24:16.992196 3506 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:24:17.028595 kubelet[3506]: I0620 18:24:17.028494 3506 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jun 20 18:24:17.039070 kubelet[3506]: I0620 18:24:17.038885 3506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:24:17.039070 kubelet[3506]: I0620 18:24:17.038978 3506 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:24:17.039070 kubelet[3506]: I0620 18:24:17.039050 3506 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:24:17.039070 kubelet[3506]: I0620 18:24:17.039066 3506 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:24:17.039405 kubelet[3506]: E0620 18:24:17.039207 3506 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117535 3506 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117566 3506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117600 3506 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117809 3506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117827 3506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117858 3506 policy_none.go:49] "None policy: Start" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117875 3506 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.117893 3506 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:24:17.118363 kubelet[3506]: I0620 18:24:17.118072 3506 state_mem.go:75] "Updated machine memory state" Jun 20 18:24:17.133142 kubelet[3506]: E0620 18:24:17.133097 3506 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:24:17.134785 kubelet[3506]: I0620 18:24:17.134324 3506 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:24:17.135070 kubelet[3506]: I0620 18:24:17.134668 3506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:24:17.135912 kubelet[3506]: I0620 18:24:17.135697 3506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:24:17.141325 kubelet[3506]: I0620 18:24:17.141037 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-135" Jun 20 18:24:17.141816 kubelet[3506]: E0620 18:24:17.141769 3506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:24:17.144265 kubelet[3506]: I0620 18:24:17.143125 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:17.144265 kubelet[3506]: I0620 18:24:17.143784 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.180898 kubelet[3506]: I0620 18:24:17.180757 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.180898 kubelet[3506]: I0620 18:24:17.180826 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.180898 kubelet[3506]: I0620 18:24:17.180868 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.181133 kubelet[3506]: I0620 18:24:17.180911 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/157e996923d2fb96d2fe2eedbd33ccc0-ca-certs\") pod \"kube-apiserver-ip-172-31-21-135\" (UID: \"157e996923d2fb96d2fe2eedbd33ccc0\") " pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:17.181133 kubelet[3506]: I0620 18:24:17.180950 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.181133 kubelet[3506]: I0620 18:24:17.180988 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9252acccb8917b13bc972b1bdf02bbe1-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-135\" (UID: \"9252acccb8917b13bc972b1bdf02bbe1\") " pod="kube-system/kube-scheduler-ip-172-31-21-135" Jun 20 18:24:17.181133 kubelet[3506]: I0620 18:24:17.181062 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/157e996923d2fb96d2fe2eedbd33ccc0-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-135\" (UID: \"157e996923d2fb96d2fe2eedbd33ccc0\") " pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:17.181330 kubelet[3506]: I0620 18:24:17.181122 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/157e996923d2fb96d2fe2eedbd33ccc0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-135\" (UID: \"157e996923d2fb96d2fe2eedbd33ccc0\") " pod="kube-system/kube-apiserver-ip-172-31-21-135" Jun 20 18:24:17.181330 kubelet[3506]: I0620 18:24:17.181181 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2711326a59d488813bb7a8b352595ae5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-135\" (UID: \"2711326a59d488813bb7a8b352595ae5\") " pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.185597 kubelet[3506]: E0620 18:24:17.185541 3506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-135\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-135" Jun 20 18:24:17.266215 kubelet[3506]: I0620 18:24:17.266140 3506 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-135" Jun 20 18:24:17.284490 kubelet[3506]: I0620 18:24:17.284323 3506 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-135" Jun 20 18:24:17.284490 kubelet[3506]: I0620 18:24:17.284460 3506 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-135" Jun 20 18:24:17.872754 kubelet[3506]: I0620 18:24:17.872628 3506 apiserver.go:52] "Watching apiserver" Jun 20 18:24:17.955827 kubelet[3506]: I0620 18:24:17.955759 3506 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:24:18.243317 kubelet[3506]: I0620 18:24:18.242917 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-135" podStartSLOduration=1.242893618 podStartE2EDuration="1.242893618s" podCreationTimestamp="2025-06-20 18:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:18.193696834 +0000 UTC m=+1.492277948" watchObservedRunningTime="2025-06-20 18:24:18.242893618 +0000 UTC m=+1.541474720" Jun 20 18:24:18.284816 kubelet[3506]: I0620 18:24:18.284727 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-135" podStartSLOduration=1.284704535 podStartE2EDuration="1.284704535s" podCreationTimestamp="2025-06-20 18:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:18.245108002 +0000 UTC m=+1.543689116" watchObservedRunningTime="2025-06-20 18:24:18.284704535 +0000 UTC m=+1.583285649" Jun 20 18:24:18.328243 kubelet[3506]: I0620 18:24:18.326882 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-135" podStartSLOduration=3.326859155 podStartE2EDuration="3.326859155s" podCreationTimestamp="2025-06-20 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:18.285126143 +0000 UTC m=+1.583707257" watchObservedRunningTime="2025-06-20 18:24:18.326859155 +0000 UTC m=+1.625440269" Jun 20 18:24:20.584130 kubelet[3506]: I0620 18:24:20.584064 3506 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:24:20.585393 containerd[2019]: time="2025-06-20T18:24:20.585277442Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:24:20.587109 kubelet[3506]: I0620 18:24:20.585806 3506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:24:21.358990 systemd[1]: Created slice kubepods-besteffort-pod34c44d33_a163_4e36_b0b6_285fddf92c82.slice - libcontainer container kubepods-besteffort-pod34c44d33_a163_4e36_b0b6_285fddf92c82.slice. Jun 20 18:24:21.409809 kubelet[3506]: I0620 18:24:21.409685 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hn9d\" (UniqueName: \"kubernetes.io/projected/34c44d33-a163-4e36-b0b6-285fddf92c82-kube-api-access-2hn9d\") pod \"kube-proxy-t4lsp\" (UID: \"34c44d33-a163-4e36-b0b6-285fddf92c82\") " pod="kube-system/kube-proxy-t4lsp" Jun 20 18:24:21.409954 kubelet[3506]: I0620 18:24:21.409852 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c44d33-a163-4e36-b0b6-285fddf92c82-xtables-lock\") pod \"kube-proxy-t4lsp\" (UID: \"34c44d33-a163-4e36-b0b6-285fddf92c82\") " pod="kube-system/kube-proxy-t4lsp" Jun 20 18:24:21.410037 kubelet[3506]: I0620 18:24:21.409955 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34c44d33-a163-4e36-b0b6-285fddf92c82-kube-proxy\") pod \"kube-proxy-t4lsp\" (UID: \"34c44d33-a163-4e36-b0b6-285fddf92c82\") " pod="kube-system/kube-proxy-t4lsp" Jun 20 18:24:21.410111 kubelet[3506]: I0620 18:24:21.410039 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c44d33-a163-4e36-b0b6-285fddf92c82-lib-modules\") pod \"kube-proxy-t4lsp\" (UID: \"34c44d33-a163-4e36-b0b6-285fddf92c82\") " pod="kube-system/kube-proxy-t4lsp" Jun 20 18:24:21.524246 kubelet[3506]: E0620 18:24:21.524154 3506 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 18:24:21.524246 kubelet[3506]: E0620 18:24:21.524200 3506 projected.go:194] Error preparing data for projected volume kube-api-access-2hn9d for pod kube-system/kube-proxy-t4lsp: configmap "kube-root-ca.crt" not found Jun 20 18:24:21.524508 kubelet[3506]: E0620 18:24:21.524318 3506 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34c44d33-a163-4e36-b0b6-285fddf92c82-kube-api-access-2hn9d podName:34c44d33-a163-4e36-b0b6-285fddf92c82 nodeName:}" failed. No retries permitted until 2025-06-20 18:24:22.024282655 +0000 UTC m=+5.322863757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2hn9d" (UniqueName: "kubernetes.io/projected/34c44d33-a163-4e36-b0b6-285fddf92c82-kube-api-access-2hn9d") pod "kube-proxy-t4lsp" (UID: "34c44d33-a163-4e36-b0b6-285fddf92c82") : configmap "kube-root-ca.crt" not found Jun 20 18:24:21.757191 systemd[1]: Created slice kubepods-besteffort-poda108c235_2a29_4475_9bdf_9df7e15f7776.slice - libcontainer container kubepods-besteffort-poda108c235_2a29_4475_9bdf_9df7e15f7776.slice. 
Jun 20 18:24:21.812622 kubelet[3506]: I0620 18:24:21.812539 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqtw\" (UniqueName: \"kubernetes.io/projected/a108c235-2a29-4475-9bdf-9df7e15f7776-kube-api-access-8xqtw\") pod \"tigera-operator-68f7c7984d-5swd4\" (UID: \"a108c235-2a29-4475-9bdf-9df7e15f7776\") " pod="tigera-operator/tigera-operator-68f7c7984d-5swd4" Jun 20 18:24:21.812622 kubelet[3506]: I0620 18:24:21.812665 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a108c235-2a29-4475-9bdf-9df7e15f7776-var-lib-calico\") pod \"tigera-operator-68f7c7984d-5swd4\" (UID: \"a108c235-2a29-4475-9bdf-9df7e15f7776\") " pod="tigera-operator/tigera-operator-68f7c7984d-5swd4" Jun 20 18:24:22.067459 containerd[2019]: time="2025-06-20T18:24:22.067060501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-5swd4,Uid:a108c235-2a29-4475-9bdf-9df7e15f7776,Namespace:tigera-operator,Attempt:0,}" Jun 20 18:24:22.143604 containerd[2019]: time="2025-06-20T18:24:22.143413010Z" level=info msg="connecting to shim b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9" address="unix:///run/containerd/s/150dc55f939ff0d8c8a6980d86f8b3fe1aea384beadeb6216a37d4fc5335f1db" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:22.193658 systemd[1]: Started cri-containerd-b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9.scope - libcontainer container b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9. Jun 20 18:24:22.267255 containerd[2019]: time="2025-06-20T18:24:22.267135566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-5swd4,Uid:a108c235-2a29-4475-9bdf-9df7e15f7776,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9\"" Jun 20 18:24:22.273484 containerd[2019]: time="2025-06-20T18:24:22.273426350Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 18:24:22.283463 containerd[2019]: time="2025-06-20T18:24:22.283401854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4lsp,Uid:34c44d33-a163-4e36-b0b6-285fddf92c82,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:22.326011 containerd[2019]: time="2025-06-20T18:24:22.325569027Z" level=info msg="connecting to shim c3d8d8aa17903bb737f4cb2aae4e2ce6bbe0042560f43d2e578cd880d9b2585b" address="unix:///run/containerd/s/a57483a1d073c4c22f732a19ac008d23c58cc94bfd9b6e260e240864ff2267a9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:22.368657 systemd[1]: Started cri-containerd-c3d8d8aa17903bb737f4cb2aae4e2ce6bbe0042560f43d2e578cd880d9b2585b.scope - libcontainer container c3d8d8aa17903bb737f4cb2aae4e2ce6bbe0042560f43d2e578cd880d9b2585b. 
Jun 20 18:24:22.431132 containerd[2019]: time="2025-06-20T18:24:22.431057955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4lsp,Uid:34c44d33-a163-4e36-b0b6-285fddf92c82,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3d8d8aa17903bb737f4cb2aae4e2ce6bbe0042560f43d2e578cd880d9b2585b\"" Jun 20 18:24:22.443314 containerd[2019]: time="2025-06-20T18:24:22.442858983Z" level=info msg="CreateContainer within sandbox \"c3d8d8aa17903bb737f4cb2aae4e2ce6bbe0042560f43d2e578cd880d9b2585b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:24:22.463206 containerd[2019]: time="2025-06-20T18:24:22.463147527Z" level=info msg="Container 20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:22.479257 containerd[2019]: time="2025-06-20T18:24:22.479189307Z" level=info msg="CreateContainer within sandbox \"c3d8d8aa17903bb737f4cb2aae4e2ce6bbe0042560f43d2e578cd880d9b2585b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767\"" Jun 20 18:24:22.481560 containerd[2019]: time="2025-06-20T18:24:22.481497063Z" level=info msg="StartContainer for \"20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767\"" Jun 20 18:24:22.485806 containerd[2019]: time="2025-06-20T18:24:22.485756751Z" level=info msg="connecting to shim 20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767" address="unix:///run/containerd/s/a57483a1d073c4c22f732a19ac008d23c58cc94bfd9b6e260e240864ff2267a9" protocol=ttrpc version=3 Jun 20 18:24:22.518659 systemd[1]: Started cri-containerd-20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767.scope - libcontainer container 20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767. Jun 20 18:24:22.601963 containerd[2019]: time="2025-06-20T18:24:22.601482040Z" level=info msg="StartContainer for \"20b2a41519e88de13d792b555bbd53ec7d931c11568637e763696df21229d767\" returns successfully" Jun 20 18:24:23.616312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668841589.mount: Deactivated successfully. 
Jun 20 18:24:23.790565 kubelet[3506]: I0620 18:24:23.790470 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4lsp" podStartSLOduration=2.790446366 podStartE2EDuration="2.790446366s" podCreationTimestamp="2025-06-20 18:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:23.131643795 +0000 UTC m=+6.430224909" watchObservedRunningTime="2025-06-20 18:24:23.790446366 +0000 UTC m=+7.089027468" Jun 20 18:24:24.396660 containerd[2019]: time="2025-06-20T18:24:24.396589673Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:24.399581 containerd[2019]: time="2025-06-20T18:24:24.399527321Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772" Jun 20 18:24:24.401635 containerd[2019]: time="2025-06-20T18:24:24.401568017Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:24.409378 containerd[2019]: time="2025-06-20T18:24:24.409219817Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:24.410694 containerd[2019]: time="2025-06-20T18:24:24.410490377Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 2.136980027s" Jun 20 18:24:24.410694 containerd[2019]: time="2025-06-20T18:24:24.410551001Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\"" Jun 20 18:24:24.420126 containerd[2019]: time="2025-06-20T18:24:24.419530169Z" level=info msg="CreateContainer within sandbox \"b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 18:24:24.437327 containerd[2019]: time="2025-06-20T18:24:24.437262293Z" level=info msg="Container a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:24.446050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969367090.mount: Deactivated successfully. 
Jun 20 18:24:24.456072 containerd[2019]: time="2025-06-20T18:24:24.456010793Z" level=info msg="CreateContainer within sandbox \"b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\"" Jun 20 18:24:24.458332 containerd[2019]: time="2025-06-20T18:24:24.458109929Z" level=info msg="StartContainer for \"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\"" Jun 20 18:24:24.460858 containerd[2019]: time="2025-06-20T18:24:24.460785773Z" level=info msg="connecting to shim a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae" address="unix:///run/containerd/s/150dc55f939ff0d8c8a6980d86f8b3fe1aea384beadeb6216a37d4fc5335f1db" protocol=ttrpc version=3 Jun 20 18:24:24.499633 systemd[1]: Started cri-containerd-a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae.scope - libcontainer container a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae. Jun 20 18:24:24.556841 containerd[2019]: time="2025-06-20T18:24:24.556705614Z" level=info msg="StartContainer for \"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\" returns successfully" Jun 20 18:24:33.289832 sudo[2383]: pam_unix(sudo:session): session closed for user root Jun 20 18:24:33.318142 sshd[2382]: Connection closed by 139.178.68.195 port 37952 Jun 20 18:24:33.317943 sshd-session[2380]: pam_unix(sshd:session): session closed for user core Jun 20 18:24:33.326958 systemd[1]: sshd@8-172.31.21.135:22-139.178.68.195:37952.service: Deactivated successfully. Jun 20 18:24:33.337933 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:24:33.340494 systemd[1]: session-9.scope: Consumed 9.796s CPU time, 233.5M memory peak. Jun 20 18:24:33.348445 systemd-logind[1989]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:24:33.355362 systemd-logind[1989]: Removed session 9. Jun 20 18:24:43.811049 kubelet[3506]: I0620 18:24:43.809757 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-5swd4" podStartSLOduration=20.666728218 podStartE2EDuration="22.809737405s" podCreationTimestamp="2025-06-20 18:24:21 +0000 UTC" firstStartedPulling="2025-06-20 18:24:22.26985101 +0000 UTC m=+5.568432100" lastFinishedPulling="2025-06-20 18:24:24.412860185 +0000 UTC m=+7.711441287" observedRunningTime="2025-06-20 18:24:25.145003421 +0000 UTC m=+8.443584535" watchObservedRunningTime="2025-06-20 18:24:43.809737405 +0000 UTC m=+27.108318507" Jun 20 18:24:43.836043 systemd[1]: Created slice kubepods-besteffort-pod1be0d6fc_d184_4956_b608_b434342b2179.slice - libcontainer container kubepods-besteffort-pod1be0d6fc_d184_4956_b608_b434342b2179.slice. 
Jun 20 18:24:43.859021 kubelet[3506]: I0620 18:24:43.857393 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1be0d6fc-d184-4956-b608-b434342b2179-typha-certs\") pod \"calico-typha-8554dfb675-5spcw\" (UID: \"1be0d6fc-d184-4956-b608-b434342b2179\") " pod="calico-system/calico-typha-8554dfb675-5spcw" Jun 20 18:24:43.859021 kubelet[3506]: I0620 18:24:43.857514 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dgb\" (UniqueName: \"kubernetes.io/projected/1be0d6fc-d184-4956-b608-b434342b2179-kube-api-access-p2dgb\") pod \"calico-typha-8554dfb675-5spcw\" (UID: \"1be0d6fc-d184-4956-b608-b434342b2179\") " pod="calico-system/calico-typha-8554dfb675-5spcw" Jun 20 18:24:43.859021 kubelet[3506]: I0620 18:24:43.857598 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1be0d6fc-d184-4956-b608-b434342b2179-tigera-ca-bundle\") pod \"calico-typha-8554dfb675-5spcw\" (UID: \"1be0d6fc-d184-4956-b608-b434342b2179\") " pod="calico-system/calico-typha-8554dfb675-5spcw" Jun 20 18:24:44.150622 containerd[2019]: time="2025-06-20T18:24:44.150102563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8554dfb675-5spcw,Uid:1be0d6fc-d184-4956-b608-b434342b2179,Namespace:calico-system,Attempt:0,}" Jun 20 18:24:44.165436 systemd[1]: Created slice kubepods-besteffort-pod1ba103bf_ad26_4fa6_9d32_cd144252e2dc.slice - libcontainer container kubepods-besteffort-pod1ba103bf_ad26_4fa6_9d32_cd144252e2dc.slice. Jun 20 18:24:44.219731 containerd[2019]: time="2025-06-20T18:24:44.218578751Z" level=info msg="connecting to shim 84b6a46486b0996118a30e7f410cdb8ecf2a4a599bb3f07ca9056df20372422c" address="unix:///run/containerd/s/466d96d76cfac47ed8a5afa456f01157f0034ade3ed1fe6d984bdddade519427" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:44.262198 kubelet[3506]: I0620 18:24:44.262038 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-flexvol-driver-host\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.262681 kubelet[3506]: I0620 18:24:44.262583 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-xtables-lock\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.263167 kubelet[3506]: I0620 18:24:44.263036 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-lib-modules\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.263864 kubelet[3506]: I0620 18:24:44.263380 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-cni-net-dir\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " 
pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.263864 kubelet[3506]: I0620 18:24:44.263431 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-var-lib-calico\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.263864 kubelet[3506]: I0620 18:24:44.263467 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-policysync\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.263864 kubelet[3506]: I0620 18:24:44.263518 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-cni-log-dir\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.263864 kubelet[3506]: I0620 18:24:44.263554 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-var-run-calico\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.266483 kubelet[3506]: I0620 18:24:44.263592 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-cni-bin-dir\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.266483 kubelet[3506]: I0620 18:24:44.263643 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-node-certs\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.266483 kubelet[3506]: I0620 18:24:44.266169 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-tigera-ca-bundle\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.266909 kubelet[3506]: I0620 18:24:44.266733 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfg6p\" (UniqueName: \"kubernetes.io/projected/1ba103bf-ad26-4fa6-9d32-cd144252e2dc-kube-api-access-mfg6p\") pod \"calico-node-p2gr6\" (UID: \"1ba103bf-ad26-4fa6-9d32-cd144252e2dc\") " pod="calico-system/calico-node-p2gr6" Jun 20 18:24:44.306007 systemd[1]: Started cri-containerd-84b6a46486b0996118a30e7f410cdb8ecf2a4a599bb3f07ca9056df20372422c.scope - libcontainer container 84b6a46486b0996118a30e7f410cdb8ecf2a4a599bb3f07ca9056df20372422c. 
Jun 20 18:24:44.373664 kubelet[3506]: E0620 18:24:44.373619 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.374444 kubelet[3506]: W0620 18:24:44.374394 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.374980 kubelet[3506]: E0620 18:24:44.374838 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.382070 kubelet[3506]: E0620 18:24:44.382018 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.382220 kubelet[3506]: W0620 18:24:44.382080 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.382220 kubelet[3506]: E0620 18:24:44.382116 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.434869 kubelet[3506]: E0620 18:24:44.433593 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.436312 kubelet[3506]: W0620 18:24:44.435475 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.436312 kubelet[3506]: E0620 18:24:44.435526 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.442043 containerd[2019]: time="2025-06-20T18:24:44.441948745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8554dfb675-5spcw,Uid:1be0d6fc-d184-4956-b608-b434342b2179,Namespace:calico-system,Attempt:0,} returns sandbox id \"84b6a46486b0996118a30e7f410cdb8ecf2a4a599bb3f07ca9056df20372422c\"" Jun 20 18:24:44.445602 containerd[2019]: time="2025-06-20T18:24:44.445539553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 18:24:44.481654 containerd[2019]: time="2025-06-20T18:24:44.481578865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p2gr6,Uid:1ba103bf-ad26-4fa6-9d32-cd144252e2dc,Namespace:calico-system,Attempt:0,}" Jun 20 18:24:44.491923 kubelet[3506]: E0620 18:24:44.491838 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:44.530727 kubelet[3506]: E0620 18:24:44.530665 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.530890 kubelet[3506]: W0620 18:24:44.530711 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.530890 kubelet[3506]: E0620 18:24:44.530863 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.532244 kubelet[3506]: E0620 18:24:44.532114 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.532796 kubelet[3506]: W0620 18:24:44.532156 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.532796 kubelet[3506]: E0620 18:24:44.532511 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.534003 kubelet[3506]: E0620 18:24:44.533926 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.534003 kubelet[3506]: W0620 18:24:44.533991 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.534197 kubelet[3506]: E0620 18:24:44.534152 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.536228 kubelet[3506]: E0620 18:24:44.535930 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.536228 kubelet[3506]: W0620 18:24:44.535969 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.536228 kubelet[3506]: E0620 18:24:44.536002 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.538128 kubelet[3506]: E0620 18:24:44.538057 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.538128 kubelet[3506]: W0620 18:24:44.538112 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.538408 kubelet[3506]: E0620 18:24:44.538145 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.540236 kubelet[3506]: E0620 18:24:44.538549 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.540236 kubelet[3506]: W0620 18:24:44.538579 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.540236 kubelet[3506]: E0620 18:24:44.538604 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.540236 kubelet[3506]: E0620 18:24:44.538939 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.540236 kubelet[3506]: W0620 18:24:44.538959 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.540236 kubelet[3506]: E0620 18:24:44.538980 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.540236 kubelet[3506]: E0620 18:24:44.539838 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.540236 kubelet[3506]: W0620 18:24:44.539865 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.540236 kubelet[3506]: E0620 18:24:44.539895 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.540898 kubelet[3506]: E0620 18:24:44.540527 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.540898 kubelet[3506]: W0620 18:24:44.540555 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.540898 kubelet[3506]: E0620 18:24:44.540581 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.542671 kubelet[3506]: E0620 18:24:44.541455 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.542671 kubelet[3506]: W0620 18:24:44.541489 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.542671 kubelet[3506]: E0620 18:24:44.541520 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.542671 kubelet[3506]: E0620 18:24:44.541908 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.542671 kubelet[3506]: W0620 18:24:44.541928 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.542671 kubelet[3506]: E0620 18:24:44.541951 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.542671 kubelet[3506]: E0620 18:24:44.542411 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.542671 kubelet[3506]: W0620 18:24:44.542434 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.542671 kubelet[3506]: E0620 18:24:44.542458 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.542780 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.545414 kubelet[3506]: W0620 18:24:44.542799 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.542823 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.543072 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.545414 kubelet[3506]: W0620 18:24:44.543089 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.543108 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.543359 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.545414 kubelet[3506]: W0620 18:24:44.543378 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.543397 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.545414 kubelet[3506]: E0620 18:24:44.543737 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.546800 kubelet[3506]: W0620 18:24:44.543754 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.546800 kubelet[3506]: E0620 18:24:44.543772 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.546800 kubelet[3506]: E0620 18:24:44.544638 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.546800 kubelet[3506]: W0620 18:24:44.544667 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.546800 kubelet[3506]: E0620 18:24:44.544696 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.546800 kubelet[3506]: E0620 18:24:44.545034 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.546800 kubelet[3506]: W0620 18:24:44.545053 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.546800 kubelet[3506]: E0620 18:24:44.545075 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.546800 kubelet[3506]: E0620 18:24:44.545468 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.546800 kubelet[3506]: W0620 18:24:44.545489 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.547256 kubelet[3506]: E0620 18:24:44.545512 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.547256 kubelet[3506]: E0620 18:24:44.545804 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.547256 kubelet[3506]: W0620 18:24:44.545828 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.547256 kubelet[3506]: E0620 18:24:44.545850 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.550581 containerd[2019]: time="2025-06-20T18:24:44.548319253Z" level=info msg="connecting to shim 7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766" address="unix:///run/containerd/s/027ef5a723aa4afe3e10dea2c0a073ed710bd8c4c75b985c485214cfa8fff1b8" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:44.572856 kubelet[3506]: E0620 18:24:44.572478 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.572856 kubelet[3506]: W0620 18:24:44.572517 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.572856 kubelet[3506]: E0620 18:24:44.572550 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.572856 kubelet[3506]: I0620 18:24:44.572606 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8b64f5a-c146-4322-982b-92c0687fe966-registration-dir\") pod \"csi-node-driver-4g8ft\" (UID: \"b8b64f5a-c146-4322-982b-92c0687fe966\") " pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:44.574533 kubelet[3506]: E0620 18:24:44.574491 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.574961 kubelet[3506]: W0620 18:24:44.574703 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.574961 kubelet[3506]: E0620 18:24:44.574744 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.574961 kubelet[3506]: I0620 18:24:44.574804 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b8b64f5a-c146-4322-982b-92c0687fe966-socket-dir\") pod \"csi-node-driver-4g8ft\" (UID: \"b8b64f5a-c146-4322-982b-92c0687fe966\") " pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:44.575887 kubelet[3506]: E0620 18:24:44.575836 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.575887 kubelet[3506]: W0620 18:24:44.575875 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.577725 kubelet[3506]: E0620 18:24:44.575908 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.577725 kubelet[3506]: E0620 18:24:44.577491 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.577725 kubelet[3506]: W0620 18:24:44.577520 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.577725 kubelet[3506]: E0620 18:24:44.577553 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.578922 kubelet[3506]: E0620 18:24:44.578674 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.578922 kubelet[3506]: W0620 18:24:44.578713 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.578922 kubelet[3506]: E0620 18:24:44.578746 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.579725 kubelet[3506]: E0620 18:24:44.579436 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.579725 kubelet[3506]: W0620 18:24:44.579472 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.579725 kubelet[3506]: E0620 18:24:44.579508 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.579725 kubelet[3506]: I0620 18:24:44.579524 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz86g\" (UniqueName: \"kubernetes.io/projected/b8b64f5a-c146-4322-982b-92c0687fe966-kube-api-access-rz86g\") pod \"csi-node-driver-4g8ft\" (UID: \"b8b64f5a-c146-4322-982b-92c0687fe966\") " pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:44.580870 kubelet[3506]: E0620 18:24:44.580589 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.580870 kubelet[3506]: W0620 18:24:44.580624 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.580870 kubelet[3506]: E0620 18:24:44.580655 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.581092 kubelet[3506]: E0620 18:24:44.581053 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.581092 kubelet[3506]: W0620 18:24:44.581070 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.581193 kubelet[3506]: E0620 18:24:44.581092 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.582131 kubelet[3506]: E0620 18:24:44.581405 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.582131 kubelet[3506]: W0620 18:24:44.581433 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.582131 kubelet[3506]: E0620 18:24:44.581455 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.582131 kubelet[3506]: E0620 18:24:44.581835 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.582131 kubelet[3506]: W0620 18:24:44.581856 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.582131 kubelet[3506]: E0620 18:24:44.581882 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.582131 kubelet[3506]: I0620 18:24:44.581939 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8b64f5a-c146-4322-982b-92c0687fe966-kubelet-dir\") pod \"csi-node-driver-4g8ft\" (UID: \"b8b64f5a-c146-4322-982b-92c0687fe966\") " pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:44.583098 kubelet[3506]: E0620 18:24:44.582284 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.583098 kubelet[3506]: W0620 18:24:44.582307 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.583098 kubelet[3506]: E0620 18:24:44.582331 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.583098 kubelet[3506]: E0620 18:24:44.582728 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.583098 kubelet[3506]: W0620 18:24:44.582750 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.583098 kubelet[3506]: E0620 18:24:44.582772 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.585279 kubelet[3506]: E0620 18:24:44.583469 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.585279 kubelet[3506]: W0620 18:24:44.583495 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.585279 kubelet[3506]: E0620 18:24:44.583522 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.585279 kubelet[3506]: I0620 18:24:44.583574 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8b64f5a-c146-4322-982b-92c0687fe966-varrun\") pod \"csi-node-driver-4g8ft\" (UID: \"b8b64f5a-c146-4322-982b-92c0687fe966\") " pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:44.585279 kubelet[3506]: E0620 18:24:44.584028 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.585279 kubelet[3506]: W0620 18:24:44.584053 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.585279 kubelet[3506]: E0620 18:24:44.584079 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.585279 kubelet[3506]: E0620 18:24:44.584405 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.585279 kubelet[3506]: W0620 18:24:44.584423 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.586835 kubelet[3506]: E0620 18:24:44.584443 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.649637 systemd[1]: Started cri-containerd-7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766.scope - libcontainer container 7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766. Jun 20 18:24:44.685091 kubelet[3506]: E0620 18:24:44.684970 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.685555 kubelet[3506]: W0620 18:24:44.685245 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.685555 kubelet[3506]: E0620 18:24:44.685286 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.686723 kubelet[3506]: E0620 18:24:44.686494 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.687444 kubelet[3506]: W0620 18:24:44.687173 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.688014 kubelet[3506]: E0620 18:24:44.687846 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.688993 kubelet[3506]: E0620 18:24:44.688946 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.688993 kubelet[3506]: W0620 18:24:44.688984 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.689723 kubelet[3506]: E0620 18:24:44.689017 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.689723 kubelet[3506]: E0620 18:24:44.689397 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.689723 kubelet[3506]: W0620 18:24:44.689417 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.689723 kubelet[3506]: E0620 18:24:44.689441 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.689919 kubelet[3506]: E0620 18:24:44.689726 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.689919 kubelet[3506]: W0620 18:24:44.689744 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.689919 kubelet[3506]: E0620 18:24:44.689763 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.691717 kubelet[3506]: E0620 18:24:44.690126 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.691717 kubelet[3506]: W0620 18:24:44.690155 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.691717 kubelet[3506]: E0620 18:24:44.690178 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.691717 kubelet[3506]: E0620 18:24:44.690454 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.691717 kubelet[3506]: W0620 18:24:44.690469 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.691717 kubelet[3506]: E0620 18:24:44.690487 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.691797 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.694998 kubelet[3506]: W0620 18:24:44.691829 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.691858 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.692277 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.694998 kubelet[3506]: W0620 18:24:44.692296 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.692317 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.692687 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.694998 kubelet[3506]: W0620 18:24:44.692705 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.692726 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.694998 kubelet[3506]: E0620 18:24:44.693010 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699017 kubelet[3506]: W0620 18:24:44.693025 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699017 kubelet[3506]: E0620 18:24:44.693043 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699017 kubelet[3506]: E0620 18:24:44.693877 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699017 kubelet[3506]: W0620 18:24:44.693899 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699017 kubelet[3506]: E0620 18:24:44.693926 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699017 kubelet[3506]: E0620 18:24:44.694271 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699017 kubelet[3506]: W0620 18:24:44.694285 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699017 kubelet[3506]: E0620 18:24:44.694304 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.699017 kubelet[3506]: E0620 18:24:44.694701 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699017 kubelet[3506]: W0620 18:24:44.694718 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.694738 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.695060 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699513 kubelet[3506]: W0620 18:24:44.695078 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.695097 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.695449 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699513 kubelet[3506]: W0620 18:24:44.695468 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.695488 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.696101 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699513 kubelet[3506]: W0620 18:24:44.696120 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699513 kubelet[3506]: E0620 18:24:44.696140 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.696433 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699979 kubelet[3506]: W0620 18:24:44.696448 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.696467 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.696885 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699979 kubelet[3506]: W0620 18:24:44.696901 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.696919 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.697449 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.699979 kubelet[3506]: W0620 18:24:44.697468 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.697491 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.699979 kubelet[3506]: E0620 18:24:44.697773 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.701930 kubelet[3506]: W0620 18:24:44.697790 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.701930 kubelet[3506]: E0620 18:24:44.697809 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.701930 kubelet[3506]: E0620 18:24:44.698150 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.701930 kubelet[3506]: W0620 18:24:44.698192 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.701930 kubelet[3506]: E0620 18:24:44.698212 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.701930 kubelet[3506]: E0620 18:24:44.698643 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.701930 kubelet[3506]: W0620 18:24:44.698664 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.701930 kubelet[3506]: E0620 18:24:44.698688 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:44.701930 kubelet[3506]: E0620 18:24:44.699006 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.701930 kubelet[3506]: W0620 18:24:44.699024 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.705599 kubelet[3506]: E0620 18:24:44.699050 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.705599 kubelet[3506]: E0620 18:24:44.700718 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.705599 kubelet[3506]: W0620 18:24:44.700771 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.705599 kubelet[3506]: E0620 18:24:44.700799 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.759222 kubelet[3506]: E0620 18:24:44.759079 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:44.759222 kubelet[3506]: W0620 18:24:44.759110 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:44.759222 kubelet[3506]: E0620 18:24:44.759162 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:44.811958 containerd[2019]: time="2025-06-20T18:24:44.811854734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p2gr6,Uid:1ba103bf-ad26-4fa6-9d32-cd144252e2dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\"" Jun 20 18:24:45.963632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557650627.mount: Deactivated successfully. 
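The repeated kubelet triplets above all trace back to one cause: FlexVolume plugin probing execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist on this node, so the call returns empty output and the JSON decode fails with "unexpected end of JSON input". Purely as a hedged illustration (none of this code appears in the log, and the capability flags are assumptions), a FlexVolume driver answering that init call is expected to print a small JSON status object on stdout; a minimal Go sketch:

// flexvolume_init_sketch.go — illustrative sketch only, not taken from this log.
// A FlexVolume driver is invoked as "<driver> <command> [args...]"; for "init"
// it should print a JSON status object to stdout. Printing nothing is exactly
// what produces the "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	switch cmd {
	case "init":
		// "attach": false tells the kubelet this driver has no attach/detach phase.
		out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: "unhandled command: " + cmd})
		fmt.Println(string(out))
	}
}

With a driver like this present at the probed path the probe loop above would stop logging; when no FlexVolume workloads are in use, the messages are typically benign noise.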
Jun 20 18:24:46.040500 kubelet[3506]: E0620 18:24:46.040401 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:47.229694 containerd[2019]: time="2025-06-20T18:24:47.229633550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:47.231634 containerd[2019]: time="2025-06-20T18:24:47.231569402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 20 18:24:47.234156 containerd[2019]: time="2025-06-20T18:24:47.234060614Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:47.238876 containerd[2019]: time="2025-06-20T18:24:47.238794230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:47.240122 containerd[2019]: time="2025-06-20T18:24:47.239922770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 2.794317853s" Jun 20 18:24:47.240122 containerd[2019]: time="2025-06-20T18:24:47.239974298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 20 18:24:47.242919 containerd[2019]: time="2025-06-20T18:24:47.242779898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 18:24:47.276377 containerd[2019]: time="2025-06-20T18:24:47.276300963Z" level=info msg="CreateContainer within sandbox \"84b6a46486b0996118a30e7f410cdb8ecf2a4a599bb3f07ca9056df20372422c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 18:24:47.295991 containerd[2019]: time="2025-06-20T18:24:47.295928811Z" level=info msg="Container afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:47.305235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214610747.mount: Deactivated successfully. 
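The containerd entries above record the ghcr.io/flatcar/calico/typha:v3.30.1 pull completing (about 2.79s, roughly 33 MB read) and the CreateContainer call for calico-typha inside an existing sandbox. As a hedged sketch only (the Go module path and the "k8s.io" namespace are assumptions about the client setup, not something this log shows), an equivalent pull through the containerd Go client looks roughly like this:

// pull_sketch.go — illustrative sketch only. Assumes the containerd 1.x Go
// client module path and the "k8s.io" namespace used for CRI-managed images.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images pulled on behalf of Kubernetes live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}

The "Pulled image ... returns image reference" line above is the CRI-side record of the same operation.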
Jun 20 18:24:47.320373 containerd[2019]: time="2025-06-20T18:24:47.319864275Z" level=info msg="CreateContainer within sandbox \"84b6a46486b0996118a30e7f410cdb8ecf2a4a599bb3f07ca9056df20372422c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34\"" Jun 20 18:24:47.322377 containerd[2019]: time="2025-06-20T18:24:47.321008355Z" level=info msg="StartContainer for \"afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34\"" Jun 20 18:24:47.324885 containerd[2019]: time="2025-06-20T18:24:47.324756567Z" level=info msg="connecting to shim afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34" address="unix:///run/containerd/s/466d96d76cfac47ed8a5afa456f01157f0034ade3ed1fe6d984bdddade519427" protocol=ttrpc version=3 Jun 20 18:24:47.388680 systemd[1]: Started cri-containerd-afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34.scope - libcontainer container afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34. Jun 20 18:24:47.492750 containerd[2019]: time="2025-06-20T18:24:47.492474832Z" level=info msg="StartContainer for \"afa97ab764fa233b93e35848cbb089b7dbae986c105e3ceb68419505b5e56a34\" returns successfully" Jun 20 18:24:48.040006 kubelet[3506]: E0620 18:24:48.039918 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:48.267293 kubelet[3506]: I0620 18:24:48.266486 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8554dfb675-5spcw" podStartSLOduration=2.469400559 podStartE2EDuration="5.266466316s" podCreationTimestamp="2025-06-20 18:24:43 +0000 UTC" firstStartedPulling="2025-06-20 18:24:44.444679717 +0000 UTC m=+27.743260807" lastFinishedPulling="2025-06-20 18:24:47.241745474 +0000 UTC m=+30.540326564" observedRunningTime="2025-06-20 18:24:48.262458292 +0000 UTC m=+31.561039418" watchObservedRunningTime="2025-06-20 18:24:48.266466316 +0000 UTC m=+31.565047418" Jun 20 18:24:48.276615 kubelet[3506]: E0620 18:24:48.276565 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.276739 kubelet[3506]: W0620 18:24:48.276635 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.276739 kubelet[3506]: E0620 18:24:48.276671 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.277226 kubelet[3506]: E0620 18:24:48.277191 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.277332 kubelet[3506]: W0620 18:24:48.277218 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.277441 kubelet[3506]: E0620 18:24:48.277421 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.277938 kubelet[3506]: E0620 18:24:48.277905 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.278006 kubelet[3506]: W0620 18:24:48.277964 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.278006 kubelet[3506]: E0620 18:24:48.277993 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.279181 kubelet[3506]: E0620 18:24:48.278606 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.279181 kubelet[3506]: W0620 18:24:48.278634 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.279181 kubelet[3506]: E0620 18:24:48.278692 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.279448 kubelet[3506]: E0620 18:24:48.279236 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.279448 kubelet[3506]: W0620 18:24:48.279255 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.279448 kubelet[3506]: E0620 18:24:48.279321 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.279843 kubelet[3506]: E0620 18:24:48.279795 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.279843 kubelet[3506]: W0620 18:24:48.279828 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.279978 kubelet[3506]: E0620 18:24:48.279851 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.280214 kubelet[3506]: E0620 18:24:48.280175 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.280214 kubelet[3506]: W0620 18:24:48.280204 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.280316 kubelet[3506]: E0620 18:24:48.280226 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.280601 kubelet[3506]: E0620 18:24:48.280562 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.280601 kubelet[3506]: W0620 18:24:48.280589 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.280733 kubelet[3506]: E0620 18:24:48.280611 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.280983 kubelet[3506]: E0620 18:24:48.280944 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.280983 kubelet[3506]: W0620 18:24:48.280972 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.281083 kubelet[3506]: E0620 18:24:48.280993 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.281318 kubelet[3506]: E0620 18:24:48.281291 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.281443 kubelet[3506]: W0620 18:24:48.281315 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.281443 kubelet[3506]: E0620 18:24:48.281365 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.281733 kubelet[3506]: E0620 18:24:48.281705 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.281792 kubelet[3506]: W0620 18:24:48.281730 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.281792 kubelet[3506]: E0620 18:24:48.281751 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.282085 kubelet[3506]: E0620 18:24:48.282059 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.282159 kubelet[3506]: W0620 18:24:48.282083 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.282159 kubelet[3506]: E0620 18:24:48.282104 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.282506 kubelet[3506]: E0620 18:24:48.282479 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.282568 kubelet[3506]: W0620 18:24:48.282503 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.282568 kubelet[3506]: E0620 18:24:48.282524 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.282843 kubelet[3506]: E0620 18:24:48.282800 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.282843 kubelet[3506]: W0620 18:24:48.282830 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.283048 kubelet[3506]: E0620 18:24:48.282851 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.283214 kubelet[3506]: E0620 18:24:48.283188 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.283282 kubelet[3506]: W0620 18:24:48.283212 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.283282 kubelet[3506]: E0620 18:24:48.283233 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.326029 kubelet[3506]: E0620 18:24:48.325899 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.326029 kubelet[3506]: W0620 18:24:48.325932 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.326029 kubelet[3506]: E0620 18:24:48.325960 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.327654 kubelet[3506]: E0620 18:24:48.327281 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.327654 kubelet[3506]: W0620 18:24:48.327389 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.327654 kubelet[3506]: E0620 18:24:48.327432 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.328097 kubelet[3506]: E0620 18:24:48.328074 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.328213 kubelet[3506]: W0620 18:24:48.328190 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.328326 kubelet[3506]: E0620 18:24:48.328295 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.328739 kubelet[3506]: E0620 18:24:48.328719 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.328891 kubelet[3506]: W0620 18:24:48.328817 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.329179 kubelet[3506]: E0620 18:24:48.328969 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.329431 kubelet[3506]: E0620 18:24:48.329410 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.329539 kubelet[3506]: W0620 18:24:48.329518 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.329646 kubelet[3506]: E0620 18:24:48.329624 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.330009 kubelet[3506]: E0620 18:24:48.329988 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.330312 kubelet[3506]: W0620 18:24:48.330102 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.330312 kubelet[3506]: E0620 18:24:48.330129 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.331038 kubelet[3506]: E0620 18:24:48.331012 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.331148 kubelet[3506]: W0620 18:24:48.331125 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.331642 kubelet[3506]: E0620 18:24:48.331603 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.332174 kubelet[3506]: E0620 18:24:48.332129 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.332174 kubelet[3506]: W0620 18:24:48.332162 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.332330 kubelet[3506]: E0620 18:24:48.332188 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.332629 kubelet[3506]: E0620 18:24:48.332600 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.332692 kubelet[3506]: W0620 18:24:48.332628 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.332692 kubelet[3506]: E0620 18:24:48.332652 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.333031 kubelet[3506]: E0620 18:24:48.332990 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.333031 kubelet[3506]: W0620 18:24:48.333016 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.333170 kubelet[3506]: E0620 18:24:48.333038 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.333402 kubelet[3506]: E0620 18:24:48.333374 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.333488 kubelet[3506]: W0620 18:24:48.333400 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.333488 kubelet[3506]: E0620 18:24:48.333421 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.333807 kubelet[3506]: E0620 18:24:48.333782 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.333878 kubelet[3506]: W0620 18:24:48.333805 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.333878 kubelet[3506]: E0620 18:24:48.333825 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.334115 kubelet[3506]: E0620 18:24:48.334084 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.334188 kubelet[3506]: W0620 18:24:48.334113 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.334188 kubelet[3506]: E0620 18:24:48.334140 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.334554 kubelet[3506]: E0620 18:24:48.334528 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.334615 kubelet[3506]: W0620 18:24:48.334552 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.334615 kubelet[3506]: E0620 18:24:48.334573 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.335161 kubelet[3506]: E0620 18:24:48.335135 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.335161 kubelet[3506]: W0620 18:24:48.335159 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.335267 kubelet[3506]: E0620 18:24:48.335181 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.335580 kubelet[3506]: E0620 18:24:48.335553 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.335673 kubelet[3506]: W0620 18:24:48.335578 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.335673 kubelet[3506]: E0620 18:24:48.335601 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:48.336164 kubelet[3506]: E0620 18:24:48.336125 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.336383 kubelet[3506]: W0620 18:24:48.336291 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.336460 kubelet[3506]: E0620 18:24:48.336324 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:48.336937 kubelet[3506]: E0620 18:24:48.336861 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:48.336937 kubelet[3506]: W0620 18:24:48.336883 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:48.336937 kubelet[3506]: E0620 18:24:48.336904 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.234722 containerd[2019]: time="2025-06-20T18:24:49.234439780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:49.236634 containerd[2019]: time="2025-06-20T18:24:49.236561404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 20 18:24:49.239415 kubelet[3506]: I0620 18:24:49.239324 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:24:49.240678 containerd[2019]: time="2025-06-20T18:24:49.240287128Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:49.244758 containerd[2019]: time="2025-06-20T18:24:49.244678816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:49.246156 containerd[2019]: time="2025-06-20T18:24:49.245949628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 2.00310901s" Jun 20 18:24:49.246156 containerd[2019]: time="2025-06-20T18:24:49.246007756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 20 18:24:49.254652 containerd[2019]: time="2025-06-20T18:24:49.254532796Z" level=info msg="CreateContainer within sandbox \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 18:24:49.276934 
containerd[2019]: time="2025-06-20T18:24:49.276857417Z" level=info msg="Container ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:49.285774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814526083.mount: Deactivated successfully. Jun 20 18:24:49.289981 kubelet[3506]: E0620 18:24:49.289941 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.290146 kubelet[3506]: W0620 18:24:49.289999 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.290146 kubelet[3506]: E0620 18:24:49.290032 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.290583 kubelet[3506]: E0620 18:24:49.290552 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.290668 kubelet[3506]: W0620 18:24:49.290580 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.290668 kubelet[3506]: E0620 18:24:49.290633 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.291003 kubelet[3506]: E0620 18:24:49.290975 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.291003 kubelet[3506]: W0620 18:24:49.291000 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.291137 kubelet[3506]: E0620 18:24:49.291021 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.291490 kubelet[3506]: E0620 18:24:49.291457 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.291583 kubelet[3506]: W0620 18:24:49.291493 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.291583 kubelet[3506]: E0620 18:24:49.291517 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.292153 kubelet[3506]: E0620 18:24:49.292121 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.292268 kubelet[3506]: W0620 18:24:49.292151 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.292268 kubelet[3506]: E0620 18:24:49.292178 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.292550 kubelet[3506]: E0620 18:24:49.292522 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.292615 kubelet[3506]: W0620 18:24:49.292552 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.292615 kubelet[3506]: E0620 18:24:49.292578 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.293051 kubelet[3506]: E0620 18:24:49.293020 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.293051 kubelet[3506]: W0620 18:24:49.293048 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.293205 kubelet[3506]: E0620 18:24:49.293073 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.293412 kubelet[3506]: E0620 18:24:49.293375 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.293412 kubelet[3506]: W0620 18:24:49.293401 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.294243 kubelet[3506]: E0620 18:24:49.293424 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.294243 kubelet[3506]: E0620 18:24:49.293739 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.294243 kubelet[3506]: W0620 18:24:49.293755 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.294243 kubelet[3506]: E0620 18:24:49.293773 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.294243 kubelet[3506]: E0620 18:24:49.294046 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.294243 kubelet[3506]: W0620 18:24:49.294061 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.294243 kubelet[3506]: E0620 18:24:49.294079 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.294713 kubelet[3506]: E0620 18:24:49.294330 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.294713 kubelet[3506]: W0620 18:24:49.294381 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.294713 kubelet[3506]: E0620 18:24:49.294401 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.294713 kubelet[3506]: E0620 18:24:49.294672 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.294713 kubelet[3506]: W0620 18:24:49.294688 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.294713 kubelet[3506]: E0620 18:24:49.294706 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.295666 kubelet[3506]: E0620 18:24:49.295636 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.295923 kubelet[3506]: W0620 18:24:49.295791 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.295923 kubelet[3506]: E0620 18:24:49.295827 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.296390 kubelet[3506]: E0620 18:24:49.296327 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.296560 kubelet[3506]: W0620 18:24:49.296520 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.296839 kubelet[3506]: E0620 18:24:49.296776 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.297934 kubelet[3506]: E0620 18:24:49.297750 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.297934 kubelet[3506]: W0620 18:24:49.297781 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.297934 kubelet[3506]: E0620 18:24:49.297809 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.298653 containerd[2019]: time="2025-06-20T18:24:49.298581353Z" level=info msg="CreateContainer within sandbox \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\"" Jun 20 18:24:49.299367 containerd[2019]: time="2025-06-20T18:24:49.299291357Z" level=info msg="StartContainer for \"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\"" Jun 20 18:24:49.303899 containerd[2019]: time="2025-06-20T18:24:49.303827753Z" level=info msg="connecting to shim ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811" address="unix:///run/containerd/s/027ef5a723aa4afe3e10dea2c0a073ed710bd8c4c75b985c485214cfa8fff1b8" protocol=ttrpc version=3 Jun 20 18:24:49.340130 kubelet[3506]: E0620 18:24:49.340094 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.340752 kubelet[3506]: W0620 18:24:49.340706 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.340849 kubelet[3506]: E0620 18:24:49.340757 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.341250 kubelet[3506]: E0620 18:24:49.341213 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.341250 kubelet[3506]: W0620 18:24:49.341246 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.341434 kubelet[3506]: E0620 18:24:49.341272 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.342041 kubelet[3506]: E0620 18:24:49.341614 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.342041 kubelet[3506]: W0620 18:24:49.341633 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.342041 kubelet[3506]: E0620 18:24:49.341655 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.342672 kubelet[3506]: E0620 18:24:49.342635 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.343507 kubelet[3506]: W0620 18:24:49.343433 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.343629 kubelet[3506]: E0620 18:24:49.343508 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.344050 kubelet[3506]: E0620 18:24:49.344017 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.344175 kubelet[3506]: W0620 18:24:49.344048 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.344175 kubelet[3506]: E0620 18:24:49.344075 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.344697 kubelet[3506]: E0620 18:24:49.344651 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.344697 kubelet[3506]: W0620 18:24:49.344684 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.344828 kubelet[3506]: E0620 18:24:49.344713 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.345266 kubelet[3506]: E0620 18:24:49.345031 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.345266 kubelet[3506]: W0620 18:24:49.345058 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.345266 kubelet[3506]: E0620 18:24:49.345080 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.345706 kubelet[3506]: E0620 18:24:49.345679 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.345706 kubelet[3506]: W0620 18:24:49.345700 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.345809 kubelet[3506]: E0620 18:24:49.345723 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.346285 kubelet[3506]: E0620 18:24:49.346029 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.346285 kubelet[3506]: W0620 18:24:49.346058 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.346285 kubelet[3506]: E0620 18:24:49.346082 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.346978 kubelet[3506]: E0620 18:24:49.346942 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.347054 kubelet[3506]: W0620 18:24:49.346975 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.347111 kubelet[3506]: E0620 18:24:49.347070 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.348040 kubelet[3506]: E0620 18:24:49.347406 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.348040 kubelet[3506]: W0620 18:24:49.347434 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.348040 kubelet[3506]: E0620 18:24:49.347458 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.347664 systemd[1]: Started cri-containerd-ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811.scope - libcontainer container ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811. Jun 20 18:24:49.349479 kubelet[3506]: E0620 18:24:49.349401 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.349479 kubelet[3506]: W0620 18:24:49.349431 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.350453 kubelet[3506]: E0620 18:24:49.349585 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.350916 kubelet[3506]: E0620 18:24:49.350871 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.351004 kubelet[3506]: W0620 18:24:49.350913 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.351004 kubelet[3506]: E0620 18:24:49.350973 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.351682 kubelet[3506]: E0620 18:24:49.351516 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.351682 kubelet[3506]: W0620 18:24:49.351671 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.351857 kubelet[3506]: E0620 18:24:49.351701 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.352389 kubelet[3506]: E0620 18:24:49.352151 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.352389 kubelet[3506]: W0620 18:24:49.352178 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.352389 kubelet[3506]: E0620 18:24:49.352236 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.353040 kubelet[3506]: E0620 18:24:49.352987 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.353151 kubelet[3506]: W0620 18:24:49.353049 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.353151 kubelet[3506]: E0620 18:24:49.353078 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.354708 kubelet[3506]: E0620 18:24:49.354633 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.354708 kubelet[3506]: W0620 18:24:49.354669 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.354898 kubelet[3506]: E0620 18:24:49.354740 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:24:49.356818 kubelet[3506]: E0620 18:24:49.355940 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:24:49.356818 kubelet[3506]: W0620 18:24:49.355976 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:24:49.356818 kubelet[3506]: E0620 18:24:49.356005 3506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:24:49.430080 containerd[2019]: time="2025-06-20T18:24:49.430007381Z" level=info msg="StartContainer for \"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\" returns successfully" Jun 20 18:24:49.452983 systemd[1]: cri-containerd-ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811.scope: Deactivated successfully. Jun 20 18:24:49.460184 containerd[2019]: time="2025-06-20T18:24:49.460123073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\" id:\"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\" pid:4207 exited_at:{seconds:1750443889 nanos:459395633}" Jun 20 18:24:49.460601 containerd[2019]: time="2025-06-20T18:24:49.460532861Z" level=info msg="received exit event container_id:\"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\" id:\"ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811\" pid:4207 exited_at:{seconds:1750443889 nanos:459395633}" Jun 20 18:24:49.502053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca3000e90a1a620488b67c6106f81ce45e64d820d025b4a039fd5fea6a31b811-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:50.040595 kubelet[3506]: E0620 18:24:50.040477 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:50.256291 containerd[2019]: time="2025-06-20T18:24:50.255921329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 18:24:52.041890 kubelet[3506]: E0620 18:24:52.041779 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:53.292086 containerd[2019]: time="2025-06-20T18:24:53.291963848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:53.293938 containerd[2019]: time="2025-06-20T18:24:53.293877452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 20 18:24:53.296514 containerd[2019]: time="2025-06-20T18:24:53.296385897Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:53.301514 containerd[2019]: time="2025-06-20T18:24:53.301432665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:53.303034 containerd[2019]: time="2025-06-20T18:24:53.302152173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 3.046164628s" Jun 20 18:24:53.303034 containerd[2019]: time="2025-06-20T18:24:53.302204385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 20 18:24:53.311868 containerd[2019]: time="2025-06-20T18:24:53.311797497Z" level=info msg="CreateContainer within sandbox \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 18:24:53.333209 containerd[2019]: time="2025-06-20T18:24:53.333129789Z" level=info msg="Container e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:53.341894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267460093.mount: Deactivated successfully. 
Jun 20 18:24:53.358762 containerd[2019]: time="2025-06-20T18:24:53.358710585Z" level=info msg="CreateContainer within sandbox \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\"" Jun 20 18:24:53.360572 containerd[2019]: time="2025-06-20T18:24:53.360510201Z" level=info msg="StartContainer for \"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\"" Jun 20 18:24:53.370548 containerd[2019]: time="2025-06-20T18:24:53.370487649Z" level=info msg="connecting to shim e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c" address="unix:///run/containerd/s/027ef5a723aa4afe3e10dea2c0a073ed710bd8c4c75b985c485214cfa8fff1b8" protocol=ttrpc version=3 Jun 20 18:24:53.420710 systemd[1]: Started cri-containerd-e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c.scope - libcontainer container e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c. Jun 20 18:24:53.504145 containerd[2019]: time="2025-06-20T18:24:53.503982730Z" level=info msg="StartContainer for \"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\" returns successfully" Jun 20 18:24:54.040181 kubelet[3506]: E0620 18:24:54.040097 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:54.395835 containerd[2019]: time="2025-06-20T18:24:54.395446258Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:24:54.401017 systemd[1]: cri-containerd-e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c.scope: Deactivated successfully. Jun 20 18:24:54.401618 systemd[1]: cri-containerd-e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c.scope: Consumed 898ms CPU time, 186.8M memory peak, 165.8M written to disk. Jun 20 18:24:54.405855 containerd[2019]: time="2025-06-20T18:24:54.405664450Z" level=info msg="received exit event container_id:\"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\" id:\"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\" pid:4273 exited_at:{seconds:1750443894 nanos:404902606}" Jun 20 18:24:54.406238 containerd[2019]: time="2025-06-20T18:24:54.406179478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\" id:\"e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c\" pid:4273 exited_at:{seconds:1750443894 nanos:404902606}" Jun 20 18:24:54.447373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9c78a3213316339f96fce83add0b2c094efe5a8f3293d7fe21ad98c7503c16c-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:54.507389 kubelet[3506]: I0620 18:24:54.507206 3506 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:24:54.686531 kubelet[3506]: I0620 18:24:54.680536 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlmj\" (UniqueName: \"kubernetes.io/projected/d73292d8-ab80-4249-9e4c-886ba65bc635-kube-api-access-cqlmj\") pod \"coredns-674b8bbfcf-xqdqt\" (UID: \"d73292d8-ab80-4249-9e4c-886ba65bc635\") " pod="kube-system/coredns-674b8bbfcf-xqdqt" Jun 20 18:24:54.686531 kubelet[3506]: I0620 18:24:54.680685 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d73292d8-ab80-4249-9e4c-886ba65bc635-config-volume\") pod \"coredns-674b8bbfcf-xqdqt\" (UID: \"d73292d8-ab80-4249-9e4c-886ba65bc635\") " pod="kube-system/coredns-674b8bbfcf-xqdqt" Jun 20 18:24:54.732167 systemd[1]: Created slice kubepods-burstable-podd73292d8_ab80_4249_9e4c_886ba65bc635.slice - libcontainer container kubepods-burstable-podd73292d8_ab80_4249_9e4c_886ba65bc635.slice. Jun 20 18:24:54.782689 kubelet[3506]: I0620 18:24:54.782628 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdvk\" (UniqueName: \"kubernetes.io/projected/e80ef963-bd5b-47f9-8662-9e335330ec91-kube-api-access-zsdvk\") pod \"coredns-674b8bbfcf-bsdqh\" (UID: \"e80ef963-bd5b-47f9-8662-9e335330ec91\") " pod="kube-system/coredns-674b8bbfcf-bsdqh" Jun 20 18:24:54.785312 kubelet[3506]: I0620 18:24:54.783183 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e80ef963-bd5b-47f9-8662-9e335330ec91-config-volume\") pod \"coredns-674b8bbfcf-bsdqh\" (UID: \"e80ef963-bd5b-47f9-8662-9e335330ec91\") " pod="kube-system/coredns-674b8bbfcf-bsdqh" Jun 20 18:24:54.788323 systemd[1]: Created slice kubepods-burstable-pode80ef963_bd5b_47f9_8662_9e335330ec91.slice - libcontainer container kubepods-burstable-pode80ef963_bd5b_47f9_8662_9e335330ec91.slice. Jun 20 18:24:54.809308 systemd[1]: Created slice kubepods-besteffort-pode81c6d23_91be_4c5a_a154_8ce97a0e9ef7.slice - libcontainer container kubepods-besteffort-pode81c6d23_91be_4c5a_a154_8ce97a0e9ef7.slice. Jun 20 18:24:54.867003 systemd[1]: Created slice kubepods-besteffort-pode2c00cb4_bcb0_4fe2_b1e0_1274cd455b84.slice - libcontainer container kubepods-besteffort-pode2c00cb4_bcb0_4fe2_b1e0_1274cd455b84.slice. 
Jun 20 18:24:54.884536 kubelet[3506]: I0620 18:24:54.884461 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sps2k\" (UniqueName: \"kubernetes.io/projected/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-kube-api-access-sps2k\") pod \"calico-apiserver-65fd8789dd-9nwnw\" (UID: \"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7\") " pod="calico-apiserver/calico-apiserver-65fd8789dd-9nwnw" Jun 20 18:24:54.905519 kubelet[3506]: I0620 18:24:54.884877 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-calico-apiserver-certs\") pod \"calico-apiserver-65fd8789dd-9nwnw\" (UID: \"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7\") " pod="calico-apiserver/calico-apiserver-65fd8789dd-9nwnw" Jun 20 18:24:54.986429 kubelet[3506]: I0620 18:24:54.986271 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84-tigera-ca-bundle\") pod \"calico-kube-controllers-86696f5875-4m9rw\" (UID: \"e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84\") " pod="calico-system/calico-kube-controllers-86696f5875-4m9rw" Jun 20 18:24:54.986569 kubelet[3506]: I0620 18:24:54.986478 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cbgc\" (UniqueName: \"kubernetes.io/projected/e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84-kube-api-access-4cbgc\") pod \"calico-kube-controllers-86696f5875-4m9rw\" (UID: \"e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84\") " pod="calico-system/calico-kube-controllers-86696f5875-4m9rw" Jun 20 18:24:54.986569 kubelet[3506]: I0620 18:24:54.986549 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c4451a6-cf44-4ddd-b939-50431d8471fd-calico-apiserver-certs\") pod \"calico-apiserver-599868b7d5-p9dk8\" (UID: \"2c4451a6-cf44-4ddd-b939-50431d8471fd\") " pod="calico-apiserver/calico-apiserver-599868b7d5-p9dk8" Jun 20 18:24:54.986716 kubelet[3506]: I0620 18:24:54.986613 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6sv\" (UniqueName: \"kubernetes.io/projected/2c4451a6-cf44-4ddd-b939-50431d8471fd-kube-api-access-gh6sv\") pod \"calico-apiserver-599868b7d5-p9dk8\" (UID: \"2c4451a6-cf44-4ddd-b939-50431d8471fd\") " pod="calico-apiserver/calico-apiserver-599868b7d5-p9dk8" Jun 20 18:24:55.024145 systemd[1]: Created slice kubepods-besteffort-pod9ccf7c28_46e1_4e76_b9e7_f3d8e8500ea5.slice - libcontainer container kubepods-besteffort-pod9ccf7c28_46e1_4e76_b9e7_f3d8e8500ea5.slice. 
Jun 20 18:24:55.058484 containerd[2019]: time="2025-06-20T18:24:55.058400649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqdqt,Uid:d73292d8-ab80-4249-9e4c-886ba65bc635,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:55.088368 kubelet[3506]: I0620 18:24:55.087643 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9cwx\" (UniqueName: \"kubernetes.io/projected/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-kube-api-access-w9cwx\") pod \"calico-apiserver-65fd8789dd-k4vsj\" (UID: \"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5\") " pod="calico-apiserver/calico-apiserver-65fd8789dd-k4vsj" Jun 20 18:24:55.089944 kubelet[3506]: I0620 18:24:55.089508 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-calico-apiserver-certs\") pod \"calico-apiserver-65fd8789dd-k4vsj\" (UID: \"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5\") " pod="calico-apiserver/calico-apiserver-65fd8789dd-k4vsj" Jun 20 18:24:55.111683 systemd[1]: Created slice kubepods-besteffort-pod2c4451a6_cf44_4ddd_b939_50431d8471fd.slice - libcontainer container kubepods-besteffort-pod2c4451a6_cf44_4ddd_b939_50431d8471fd.slice. Jun 20 18:24:55.132892 containerd[2019]: time="2025-06-20T18:24:55.131681950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-9nwnw,Uid:e81c6d23-91be-4c5a-a154-8ce97a0e9ef7,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:24:55.138874 containerd[2019]: time="2025-06-20T18:24:55.138798322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bsdqh,Uid:e80ef963-bd5b-47f9-8662-9e335330ec91,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:55.184737 containerd[2019]: time="2025-06-20T18:24:55.184204366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86696f5875-4m9rw,Uid:e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84,Namespace:calico-system,Attempt:0,}" Jun 20 18:24:55.187241 systemd[1]: Created slice kubepods-besteffort-pod71089506_2ab6_49c2_9e3f_bb88117d854f.slice - libcontainer container kubepods-besteffort-pod71089506_2ab6_49c2_9e3f_bb88117d854f.slice. 
Jun 20 18:24:55.195100 kubelet[3506]: I0620 18:24:55.192245 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71089506-2ab6-49c2-9e3f-bb88117d854f-config\") pod \"goldmane-5bd85449d4-5d28l\" (UID: \"71089506-2ab6-49c2-9e3f-bb88117d854f\") " pod="calico-system/goldmane-5bd85449d4-5d28l" Jun 20 18:24:55.195100 kubelet[3506]: I0620 18:24:55.193694 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71089506-2ab6-49c2-9e3f-bb88117d854f-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-5d28l\" (UID: \"71089506-2ab6-49c2-9e3f-bb88117d854f\") " pod="calico-system/goldmane-5bd85449d4-5d28l" Jun 20 18:24:55.195100 kubelet[3506]: I0620 18:24:55.193741 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphnv\" (UniqueName: \"kubernetes.io/projected/f2e148c1-b7bb-4f74-a08c-d254941412bc-kube-api-access-zphnv\") pod \"whisker-6977c77976-87mkt\" (UID: \"f2e148c1-b7bb-4f74-a08c-d254941412bc\") " pod="calico-system/whisker-6977c77976-87mkt" Jun 20 18:24:55.195100 kubelet[3506]: I0620 18:24:55.193821 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/71089506-2ab6-49c2-9e3f-bb88117d854f-goldmane-key-pair\") pod \"goldmane-5bd85449d4-5d28l\" (UID: \"71089506-2ab6-49c2-9e3f-bb88117d854f\") " pod="calico-system/goldmane-5bd85449d4-5d28l" Jun 20 18:24:55.195100 kubelet[3506]: I0620 18:24:55.193871 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67jq\" (UniqueName: \"kubernetes.io/projected/71089506-2ab6-49c2-9e3f-bb88117d854f-kube-api-access-l67jq\") pod \"goldmane-5bd85449d4-5d28l\" (UID: \"71089506-2ab6-49c2-9e3f-bb88117d854f\") " pod="calico-system/goldmane-5bd85449d4-5d28l" Jun 20 18:24:55.195524 kubelet[3506]: I0620 18:24:55.193933 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-ca-bundle\") pod \"whisker-6977c77976-87mkt\" (UID: \"f2e148c1-b7bb-4f74-a08c-d254941412bc\") " pod="calico-system/whisker-6977c77976-87mkt" Jun 20 18:24:55.195524 kubelet[3506]: I0620 18:24:55.193988 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-backend-key-pair\") pod \"whisker-6977c77976-87mkt\" (UID: \"f2e148c1-b7bb-4f74-a08c-d254941412bc\") " pod="calico-system/whisker-6977c77976-87mkt" Jun 20 18:24:55.218210 systemd[1]: Created slice kubepods-besteffort-podf2e148c1_b7bb_4f74_a08c_d254941412bc.slice - libcontainer container kubepods-besteffort-podf2e148c1_b7bb_4f74_a08c_d254941412bc.slice. 
Jun 20 18:24:55.324332 containerd[2019]: time="2025-06-20T18:24:55.324271367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 18:24:55.368209 containerd[2019]: time="2025-06-20T18:24:55.367425407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-k4vsj,Uid:9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:24:55.484605 containerd[2019]: time="2025-06-20T18:24:55.484530767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599868b7d5-p9dk8,Uid:2c4451a6-cf44-4ddd-b939-50431d8471fd,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:24:55.521750 containerd[2019]: time="2025-06-20T18:24:55.521698884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5d28l,Uid:71089506-2ab6-49c2-9e3f-bb88117d854f,Namespace:calico-system,Attempt:0,}" Jun 20 18:24:55.536587 containerd[2019]: time="2025-06-20T18:24:55.534229752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6977c77976-87mkt,Uid:f2e148c1-b7bb-4f74-a08c-d254941412bc,Namespace:calico-system,Attempt:0,}" Jun 20 18:24:55.740616 containerd[2019]: time="2025-06-20T18:24:55.740304277Z" level=error msg="Failed to destroy network for sandbox \"567048eac8496be443dfe015534515a10c3e8d7a60b323adaa340232fe56654d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.749549 systemd[1]: run-netns-cni\x2d18338171\x2da34c\x2dabe9\x2d48a7\x2d891ea8826a44.mount: Deactivated successfully. Jun 20 18:24:55.755575 containerd[2019]: time="2025-06-20T18:24:55.755501329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bsdqh,Uid:e80ef963-bd5b-47f9-8662-9e335330ec91,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"567048eac8496be443dfe015534515a10c3e8d7a60b323adaa340232fe56654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.756276 kubelet[3506]: E0620 18:24:55.756209 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567048eac8496be443dfe015534515a10c3e8d7a60b323adaa340232fe56654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.756479 kubelet[3506]: E0620 18:24:55.756304 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567048eac8496be443dfe015534515a10c3e8d7a60b323adaa340232fe56654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bsdqh" Jun 20 18:24:55.757461 kubelet[3506]: E0620 18:24:55.757376 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"567048eac8496be443dfe015534515a10c3e8d7a60b323adaa340232fe56654d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bsdqh" Jun 20 18:24:55.759009 kubelet[3506]: E0620 18:24:55.757526 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bsdqh_kube-system(e80ef963-bd5b-47f9-8662-9e335330ec91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bsdqh_kube-system(e80ef963-bd5b-47f9-8662-9e335330ec91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"567048eac8496be443dfe015534515a10c3e8d7a60b323adaa340232fe56654d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bsdqh" podUID="e80ef963-bd5b-47f9-8662-9e335330ec91" Jun 20 18:24:55.783933 containerd[2019]: time="2025-06-20T18:24:55.783660745Z" level=error msg="Failed to destroy network for sandbox \"a573f5f7e266718c43f66e6b1ad784cb609996ed07ed847dc902c9324e360b4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.789859 containerd[2019]: time="2025-06-20T18:24:55.789781597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqdqt,Uid:d73292d8-ab80-4249-9e4c-886ba65bc635,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a573f5f7e266718c43f66e6b1ad784cb609996ed07ed847dc902c9324e360b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.791401 kubelet[3506]: E0620 18:24:55.790333 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a573f5f7e266718c43f66e6b1ad784cb609996ed07ed847dc902c9324e360b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.791401 kubelet[3506]: E0620 18:24:55.791121 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a573f5f7e266718c43f66e6b1ad784cb609996ed07ed847dc902c9324e360b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xqdqt" Jun 20 18:24:55.791401 kubelet[3506]: E0620 18:24:55.791158 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a573f5f7e266718c43f66e6b1ad784cb609996ed07ed847dc902c9324e360b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xqdqt" Jun 20 18:24:55.791739 kubelet[3506]: E0620 18:24:55.791301 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xqdqt_kube-system(d73292d8-ab80-4249-9e4c-886ba65bc635)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xqdqt_kube-system(d73292d8-ab80-4249-9e4c-886ba65bc635)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a573f5f7e266718c43f66e6b1ad784cb609996ed07ed847dc902c9324e360b4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xqdqt" podUID="d73292d8-ab80-4249-9e4c-886ba65bc635" Jun 20 18:24:55.795833 containerd[2019]: time="2025-06-20T18:24:55.795646249Z" level=error msg="Failed to destroy network for sandbox \"5e700fff47e612e5fa05d07f4cdc8ea86a2bb9c668c71f9fdf67fed118961daa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.803215 containerd[2019]: time="2025-06-20T18:24:55.803129185Z" level=error msg="Failed to destroy network for sandbox \"80b6b97abdf9cf90e91bb474fb39a890983f7b63b8822cbafdaf6fc0c413d8ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.809313 containerd[2019]: time="2025-06-20T18:24:55.809123809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86696f5875-4m9rw,Uid:e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e700fff47e612e5fa05d07f4cdc8ea86a2bb9c668c71f9fdf67fed118961daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.810130 kubelet[3506]: E0620 18:24:55.810041 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e700fff47e612e5fa05d07f4cdc8ea86a2bb9c668c71f9fdf67fed118961daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.810130 kubelet[3506]: E0620 18:24:55.810123 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e700fff47e612e5fa05d07f4cdc8ea86a2bb9c668c71f9fdf67fed118961daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86696f5875-4m9rw" Jun 20 18:24:55.810691 kubelet[3506]: E0620 18:24:55.810159 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e700fff47e612e5fa05d07f4cdc8ea86a2bb9c668c71f9fdf67fed118961daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86696f5875-4m9rw" Jun 20 18:24:55.810691 kubelet[3506]: E0620 18:24:55.810509 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-86696f5875-4m9rw_calico-system(e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86696f5875-4m9rw_calico-system(e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e700fff47e612e5fa05d07f4cdc8ea86a2bb9c668c71f9fdf67fed118961daa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86696f5875-4m9rw" podUID="e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84" Jun 20 18:24:55.814477 containerd[2019]: time="2025-06-20T18:24:55.814241185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-9nwnw,Uid:e81c6d23-91be-4c5a-a154-8ce97a0e9ef7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b6b97abdf9cf90e91bb474fb39a890983f7b63b8822cbafdaf6fc0c413d8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.815734 kubelet[3506]: E0620 18:24:55.815282 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b6b97abdf9cf90e91bb474fb39a890983f7b63b8822cbafdaf6fc0c413d8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.816579 kubelet[3506]: E0620 18:24:55.815479 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b6b97abdf9cf90e91bb474fb39a890983f7b63b8822cbafdaf6fc0c413d8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fd8789dd-9nwnw" Jun 20 18:24:55.816579 kubelet[3506]: E0620 18:24:55.816117 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80b6b97abdf9cf90e91bb474fb39a890983f7b63b8822cbafdaf6fc0c413d8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fd8789dd-9nwnw" Jun 20 18:24:55.817112 kubelet[3506]: E0620 18:24:55.816940 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65fd8789dd-9nwnw_calico-apiserver(e81c6d23-91be-4c5a-a154-8ce97a0e9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65fd8789dd-9nwnw_calico-apiserver(e81c6d23-91be-4c5a-a154-8ce97a0e9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80b6b97abdf9cf90e91bb474fb39a890983f7b63b8822cbafdaf6fc0c413d8ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fd8789dd-9nwnw" 
podUID="e81c6d23-91be-4c5a-a154-8ce97a0e9ef7" Jun 20 18:24:55.823010 containerd[2019]: time="2025-06-20T18:24:55.822332041Z" level=error msg="Failed to destroy network for sandbox \"e9243a23645fb95d9d98fd1dc9abbdc3992659974b789c3d4233f594cd0f8c16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.837620 containerd[2019]: time="2025-06-20T18:24:55.837215425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-k4vsj,Uid:9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9243a23645fb95d9d98fd1dc9abbdc3992659974b789c3d4233f594cd0f8c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.839663 kubelet[3506]: E0620 18:24:55.839544 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9243a23645fb95d9d98fd1dc9abbdc3992659974b789c3d4233f594cd0f8c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.839663 kubelet[3506]: E0620 18:24:55.839643 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9243a23645fb95d9d98fd1dc9abbdc3992659974b789c3d4233f594cd0f8c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fd8789dd-k4vsj" Jun 20 18:24:55.842493 kubelet[3506]: E0620 18:24:55.839679 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9243a23645fb95d9d98fd1dc9abbdc3992659974b789c3d4233f594cd0f8c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fd8789dd-k4vsj" Jun 20 18:24:55.842493 kubelet[3506]: E0620 18:24:55.839765 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65fd8789dd-k4vsj_calico-apiserver(9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65fd8789dd-k4vsj_calico-apiserver(9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9243a23645fb95d9d98fd1dc9abbdc3992659974b789c3d4233f594cd0f8c16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fd8789dd-k4vsj" podUID="9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5" Jun 20 18:24:55.849782 containerd[2019]: time="2025-06-20T18:24:55.849545557Z" level=error msg="Failed to destroy network for sandbox \"a4075abf342a80e59954e5bbb44df0d5ebe4ec29211a1402a87ecb5317a3bdd9\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.853204 containerd[2019]: time="2025-06-20T18:24:55.853110217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6977c77976-87mkt,Uid:f2e148c1-b7bb-4f74-a08c-d254941412bc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4075abf342a80e59954e5bbb44df0d5ebe4ec29211a1402a87ecb5317a3bdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.854323 kubelet[3506]: E0620 18:24:55.853605 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4075abf342a80e59954e5bbb44df0d5ebe4ec29211a1402a87ecb5317a3bdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.854323 kubelet[3506]: E0620 18:24:55.853723 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4075abf342a80e59954e5bbb44df0d5ebe4ec29211a1402a87ecb5317a3bdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6977c77976-87mkt" Jun 20 18:24:55.854323 kubelet[3506]: E0620 18:24:55.853762 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4075abf342a80e59954e5bbb44df0d5ebe4ec29211a1402a87ecb5317a3bdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6977c77976-87mkt" Jun 20 18:24:55.854659 kubelet[3506]: E0620 18:24:55.853915 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6977c77976-87mkt_calico-system(f2e148c1-b7bb-4f74-a08c-d254941412bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6977c77976-87mkt_calico-system(f2e148c1-b7bb-4f74-a08c-d254941412bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4075abf342a80e59954e5bbb44df0d5ebe4ec29211a1402a87ecb5317a3bdd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6977c77976-87mkt" podUID="f2e148c1-b7bb-4f74-a08c-d254941412bc" Jun 20 18:24:55.870907 containerd[2019]: time="2025-06-20T18:24:55.870795121Z" level=error msg="Failed to destroy network for sandbox \"5262fd80279af388807b42ba8d9b63b61f3a56f61009976fb8e7d209ae9aa344\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.874183 containerd[2019]: time="2025-06-20T18:24:55.874066489Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-599868b7d5-p9dk8,Uid:2c4451a6-cf44-4ddd-b939-50431d8471fd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5262fd80279af388807b42ba8d9b63b61f3a56f61009976fb8e7d209ae9aa344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.874930 kubelet[3506]: E0620 18:24:55.874759 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5262fd80279af388807b42ba8d9b63b61f3a56f61009976fb8e7d209ae9aa344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.874930 kubelet[3506]: E0620 18:24:55.874838 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5262fd80279af388807b42ba8d9b63b61f3a56f61009976fb8e7d209ae9aa344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599868b7d5-p9dk8" Jun 20 18:24:55.874930 kubelet[3506]: E0620 18:24:55.874872 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5262fd80279af388807b42ba8d9b63b61f3a56f61009976fb8e7d209ae9aa344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599868b7d5-p9dk8" Jun 20 18:24:55.876171 kubelet[3506]: E0620 18:24:55.875252 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599868b7d5-p9dk8_calico-apiserver(2c4451a6-cf44-4ddd-b939-50431d8471fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599868b7d5-p9dk8_calico-apiserver(2c4451a6-cf44-4ddd-b939-50431d8471fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5262fd80279af388807b42ba8d9b63b61f3a56f61009976fb8e7d209ae9aa344\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599868b7d5-p9dk8" podUID="2c4451a6-cf44-4ddd-b939-50431d8471fd" Jun 20 18:24:55.892785 containerd[2019]: time="2025-06-20T18:24:55.892624513Z" level=error msg="Failed to destroy network for sandbox \"766610f7a7de21032014611399eeaba9509456918f83795a0d3121b518e4c51a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.895226 containerd[2019]: time="2025-06-20T18:24:55.895116229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5d28l,Uid:71089506-2ab6-49c2-9e3f-bb88117d854f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"766610f7a7de21032014611399eeaba9509456918f83795a0d3121b518e4c51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.895692 kubelet[3506]: E0620 18:24:55.895645 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766610f7a7de21032014611399eeaba9509456918f83795a0d3121b518e4c51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:55.895876 kubelet[3506]: E0620 18:24:55.895845 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766610f7a7de21032014611399eeaba9509456918f83795a0d3121b518e4c51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-5d28l" Jun 20 18:24:55.895999 kubelet[3506]: E0620 18:24:55.895970 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766610f7a7de21032014611399eeaba9509456918f83795a0d3121b518e4c51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-5d28l" Jun 20 18:24:55.896253 kubelet[3506]: E0620 18:24:55.896188 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-5d28l_calico-system(71089506-2ab6-49c2-9e3f-bb88117d854f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-5d28l_calico-system(71089506-2ab6-49c2-9e3f-bb88117d854f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"766610f7a7de21032014611399eeaba9509456918f83795a0d3121b518e4c51a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-5d28l" podUID="71089506-2ab6-49c2-9e3f-bb88117d854f" Jun 20 18:24:56.053363 systemd[1]: Created slice kubepods-besteffort-podb8b64f5a_c146_4322_982b_92c0687fe966.slice - libcontainer container kubepods-besteffort-podb8b64f5a_c146_4322_982b_92c0687fe966.slice. 
Jun 20 18:24:56.059854 containerd[2019]: time="2025-06-20T18:24:56.059466790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4g8ft,Uid:b8b64f5a-c146-4322-982b-92c0687fe966,Namespace:calico-system,Attempt:0,}" Jun 20 18:24:56.156159 containerd[2019]: time="2025-06-20T18:24:56.156093047Z" level=error msg="Failed to destroy network for sandbox \"8f24cfb9c77bccf0bd7bc1dcac6c4f95b0cbb01f7354800aac758657e4cdb191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:56.158733 containerd[2019]: time="2025-06-20T18:24:56.158658239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4g8ft,Uid:b8b64f5a-c146-4322-982b-92c0687fe966,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f24cfb9c77bccf0bd7bc1dcac6c4f95b0cbb01f7354800aac758657e4cdb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:56.159029 kubelet[3506]: E0620 18:24:56.158966 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f24cfb9c77bccf0bd7bc1dcac6c4f95b0cbb01f7354800aac758657e4cdb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:24:56.159639 kubelet[3506]: E0620 18:24:56.159041 3506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f24cfb9c77bccf0bd7bc1dcac6c4f95b0cbb01f7354800aac758657e4cdb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:56.159639 kubelet[3506]: E0620 18:24:56.159076 3506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f24cfb9c77bccf0bd7bc1dcac6c4f95b0cbb01f7354800aac758657e4cdb191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4g8ft" Jun 20 18:24:56.159639 kubelet[3506]: E0620 18:24:56.159149 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4g8ft_calico-system(b8b64f5a-c146-4322-982b-92c0687fe966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4g8ft_calico-system(b8b64f5a-c146-4322-982b-92c0687fe966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f24cfb9c77bccf0bd7bc1dcac6c4f95b0cbb01f7354800aac758657e4cdb191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4g8ft" podUID="b8b64f5a-c146-4322-982b-92c0687fe966" Jun 20 18:24:56.447466 systemd[1]: run-netns-cni\x2d2e9fbaee\x2df08f\x2d23a0\x2d9d95\x2df85511036c6d.mount: Deactivated successfully. 
Jun 20 18:24:56.448358 systemd[1]: run-netns-cni\x2d199f1052\x2d8926\x2da323\x2d3ecd\x2d3b961a4b48a7.mount: Deactivated successfully. Jun 20 18:24:56.448826 systemd[1]: run-netns-cni\x2dda256c4a\x2d3e56\x2d155b\x2dbae1\x2d0649214812fc.mount: Deactivated successfully. Jun 20 18:24:56.449198 systemd[1]: run-netns-cni\x2db8c0bcdd\x2df933\x2d8e3a\x2d5c11\x2dd6d292ab2d3e.mount: Deactivated successfully. Jun 20 18:24:56.449502 systemd[1]: run-netns-cni\x2d9e4cbfb2\x2d628c\x2dbb0a\x2ddfae\x2d501bd9ca3f20.mount: Deactivated successfully. Jun 20 18:24:56.449838 systemd[1]: run-netns-cni\x2d3404dbad\x2d9c6b\x2d5106\x2d85ae\x2dd6ba14c21aa7.mount: Deactivated successfully. Jun 20 18:24:56.450089 systemd[1]: run-netns-cni\x2d5b91ce2b\x2da8b8\x2d92b2\x2d694a\x2d82afdacb12a5.mount: Deactivated successfully. Jun 20 18:25:01.573685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119557998.mount: Deactivated successfully. Jun 20 18:25:01.628756 containerd[2019]: time="2025-06-20T18:25:01.628683690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:01.630921 containerd[2019]: time="2025-06-20T18:25:01.630841914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367" Jun 20 18:25:01.633202 containerd[2019]: time="2025-06-20T18:25:01.633122946Z" level=info msg="ImageCreate event name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:01.637444 containerd[2019]: time="2025-06-20T18:25:01.637378998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:01.638630 containerd[2019]: time="2025-06-20T18:25:01.638532390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 6.314192119s" Jun 20 18:25:01.638737 containerd[2019]: time="2025-06-20T18:25:01.638628006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\"" Jun 20 18:25:01.688830 containerd[2019]: time="2025-06-20T18:25:01.688773270Z" level=info msg="CreateContainer within sandbox \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 18:25:01.710380 containerd[2019]: time="2025-06-20T18:25:01.707704662Z" level=info msg="Container 2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:01.731311 containerd[2019]: time="2025-06-20T18:25:01.731257146Z" level=info msg="CreateContainer within sandbox \"7f7f0e6977aef9fc03b3e6277c67b1f3628f28820a2cc16317142929f841d766\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\"" Jun 20 18:25:01.733450 containerd[2019]: time="2025-06-20T18:25:01.733374594Z" level=info msg="StartContainer for 
\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\"" Jun 20 18:25:01.737592 containerd[2019]: time="2025-06-20T18:25:01.736746366Z" level=info msg="connecting to shim 2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb" address="unix:///run/containerd/s/027ef5a723aa4afe3e10dea2c0a073ed710bd8c4c75b985c485214cfa8fff1b8" protocol=ttrpc version=3 Jun 20 18:25:01.812688 systemd[1]: Started cri-containerd-2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb.scope - libcontainer container 2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb. Jun 20 18:25:01.920793 containerd[2019]: time="2025-06-20T18:25:01.920632315Z" level=info msg="StartContainer for \"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" returns successfully" Jun 20 18:25:02.073950 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 18:25:02.074073 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 18:25:02.456274 kubelet[3506]: I0620 18:25:02.454045 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-ca-bundle\") pod \"f2e148c1-b7bb-4f74-a08c-d254941412bc\" (UID: \"f2e148c1-b7bb-4f74-a08c-d254941412bc\") " Jun 20 18:25:02.456274 kubelet[3506]: I0620 18:25:02.454184 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zphnv\" (UniqueName: \"kubernetes.io/projected/f2e148c1-b7bb-4f74-a08c-d254941412bc-kube-api-access-zphnv\") pod \"f2e148c1-b7bb-4f74-a08c-d254941412bc\" (UID: \"f2e148c1-b7bb-4f74-a08c-d254941412bc\") " Jun 20 18:25:02.456274 kubelet[3506]: I0620 18:25:02.454535 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-backend-key-pair\") pod \"f2e148c1-b7bb-4f74-a08c-d254941412bc\" (UID: \"f2e148c1-b7bb-4f74-a08c-d254941412bc\") " Jun 20 18:25:02.456274 kubelet[3506]: I0620 18:25:02.454876 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2e148c1-b7bb-4f74-a08c-d254941412bc" (UID: "f2e148c1-b7bb-4f74-a08c-d254941412bc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:25:02.470160 kubelet[3506]: I0620 18:25:02.470102 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2e148c1-b7bb-4f74-a08c-d254941412bc" (UID: "f2e148c1-b7bb-4f74-a08c-d254941412bc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:25:02.471683 kubelet[3506]: I0620 18:25:02.471615 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e148c1-b7bb-4f74-a08c-d254941412bc-kube-api-access-zphnv" (OuterVolumeSpecName: "kube-api-access-zphnv") pod "f2e148c1-b7bb-4f74-a08c-d254941412bc" (UID: "f2e148c1-b7bb-4f74-a08c-d254941412bc"). InnerVolumeSpecName "kube-api-access-zphnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:25:02.555986 kubelet[3506]: I0620 18:25:02.555915 3506 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-backend-key-pair\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:02.555986 kubelet[3506]: I0620 18:25:02.555976 3506 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e148c1-b7bb-4f74-a08c-d254941412bc-whisker-ca-bundle\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:02.556233 kubelet[3506]: I0620 18:25:02.556001 3506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zphnv\" (UniqueName: \"kubernetes.io/projected/f2e148c1-b7bb-4f74-a08c-d254941412bc-kube-api-access-zphnv\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:02.574555 systemd[1]: var-lib-kubelet-pods-f2e148c1\x2db7bb\x2d4f74\x2da08c\x2dd254941412bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzphnv.mount: Deactivated successfully. Jun 20 18:25:02.574751 systemd[1]: var-lib-kubelet-pods-f2e148c1\x2db7bb\x2d4f74\x2da08c\x2dd254941412bc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 18:25:02.666313 systemd[1]: Removed slice kubepods-besteffort-podf2e148c1_b7bb_4f74_a08c_d254941412bc.slice - libcontainer container kubepods-besteffort-podf2e148c1_b7bb_4f74_a08c_d254941412bc.slice. Jun 20 18:25:02.700380 kubelet[3506]: I0620 18:25:02.698409 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p2gr6" podStartSLOduration=1.872835735 podStartE2EDuration="18.698374051s" podCreationTimestamp="2025-06-20 18:24:44 +0000 UTC" firstStartedPulling="2025-06-20 18:24:44.815234066 +0000 UTC m=+28.113815168" lastFinishedPulling="2025-06-20 18:25:01.640772382 +0000 UTC m=+44.939353484" observedRunningTime="2025-06-20 18:25:02.400496946 +0000 UTC m=+45.699078060" watchObservedRunningTime="2025-06-20 18:25:02.698374051 +0000 UTC m=+45.996955177" Jun 20 18:25:02.812769 systemd[1]: Created slice kubepods-besteffort-pod415c5ed5_43e0_4c45_8eb9_f767547027f5.slice - libcontainer container kubepods-besteffort-pod415c5ed5_43e0_4c45_8eb9_f767547027f5.slice. 
Jun 20 18:25:02.859103 kubelet[3506]: I0620 18:25:02.859053 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/415c5ed5-43e0-4c45-8eb9-f767547027f5-whisker-backend-key-pair\") pod \"whisker-c8955bc85-w9ldc\" (UID: \"415c5ed5-43e0-4c45-8eb9-f767547027f5\") " pod="calico-system/whisker-c8955bc85-w9ldc" Jun 20 18:25:02.859706 kubelet[3506]: I0620 18:25:02.859664 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415c5ed5-43e0-4c45-8eb9-f767547027f5-whisker-ca-bundle\") pod \"whisker-c8955bc85-w9ldc\" (UID: \"415c5ed5-43e0-4c45-8eb9-f767547027f5\") " pod="calico-system/whisker-c8955bc85-w9ldc" Jun 20 18:25:02.860062 kubelet[3506]: I0620 18:25:02.859936 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5vf\" (UniqueName: \"kubernetes.io/projected/415c5ed5-43e0-4c45-8eb9-f767547027f5-kube-api-access-nr5vf\") pod \"whisker-c8955bc85-w9ldc\" (UID: \"415c5ed5-43e0-4c45-8eb9-f767547027f5\") " pod="calico-system/whisker-c8955bc85-w9ldc" Jun 20 18:25:03.045117 kubelet[3506]: I0620 18:25:03.045052 3506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e148c1-b7bb-4f74-a08c-d254941412bc" path="/var/lib/kubelet/pods/f2e148c1-b7bb-4f74-a08c-d254941412bc/volumes" Jun 20 18:25:03.123725 containerd[2019]: time="2025-06-20T18:25:03.122820437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c8955bc85-w9ldc,Uid:415c5ed5-43e0-4c45-8eb9-f767547027f5,Namespace:calico-system,Attempt:0,}" Jun 20 18:25:03.359154 kubelet[3506]: I0620 18:25:03.359112 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:03.392025 (udev-worker)[4596]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:25:03.395037 systemd-networkd[1817]: calife2f74cc8ae: Link UP Jun 20 18:25:03.396136 systemd-networkd[1817]: calife2f74cc8ae: Gained carrier Jun 20 18:25:03.424144 containerd[2019]: 2025-06-20 18:25:03.175 [INFO][4624] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 18:25:03.424144 containerd[2019]: 2025-06-20 18:25:03.251 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0 whisker-c8955bc85- calico-system 415c5ed5-43e0-4c45-8eb9-f767547027f5 916 0 2025-06-20 18:25:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c8955bc85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-135 whisker-c8955bc85-w9ldc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calife2f74cc8ae [] [] }} ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-" Jun 20 18:25:03.424144 containerd[2019]: 2025-06-20 18:25:03.251 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.424144 containerd[2019]: 2025-06-20 18:25:03.308 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" HandleID="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Workload="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.308 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" HandleID="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Workload="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-135", "pod":"whisker-c8955bc85-w9ldc", "timestamp":"2025-06-20 18:25:03.308306382 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.309 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.309 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.309 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.324 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" host="ip-172-31-21-135" Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.333 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.343 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.346 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:03.424662 containerd[2019]: 2025-06-20 18:25:03.350 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.351 [INFO][4637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" host="ip-172-31-21-135" Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.354 [INFO][4637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595 Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.363 [INFO][4637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" host="ip-172-31-21-135" Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.375 [INFO][4637] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.129/26] block=192.168.34.128/26 handle="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" host="ip-172-31-21-135" Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.375 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.129/26] handle="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" host="ip-172-31-21-135" Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.375 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:25:03.425085 containerd[2019]: 2025-06-20 18:25:03.375 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.129/26] IPv6=[] ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" HandleID="k8s-pod-network.c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Workload="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.425996 containerd[2019]: 2025-06-20 18:25:03.381 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0", GenerateName:"whisker-c8955bc85-", Namespace:"calico-system", SelfLink:"", UID:"415c5ed5-43e0-4c45-8eb9-f767547027f5", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 25, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c8955bc85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"whisker-c8955bc85-w9ldc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife2f74cc8ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:03.425996 containerd[2019]: 2025-06-20 18:25:03.381 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.129/32] ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.426514 containerd[2019]: 2025-06-20 18:25:03.381 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife2f74cc8ae ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.426514 containerd[2019]: 2025-06-20 18:25:03.396 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.426690 containerd[2019]: 2025-06-20 18:25:03.396 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" 
WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0", GenerateName:"whisker-c8955bc85-", Namespace:"calico-system", SelfLink:"", UID:"415c5ed5-43e0-4c45-8eb9-f767547027f5", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 25, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c8955bc85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595", Pod:"whisker-c8955bc85-w9ldc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife2f74cc8ae", MAC:"1e:7a:f4:cc:d5:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:03.426875 containerd[2019]: 2025-06-20 18:25:03.420 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" Namespace="calico-system" Pod="whisker-c8955bc85-w9ldc" WorkloadEndpoint="ip--172--31--21--135-k8s-whisker--c8955bc85--w9ldc-eth0" Jun 20 18:25:03.497743 containerd[2019]: time="2025-06-20T18:25:03.497644423Z" level=info msg="connecting to shim c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595" address="unix:///run/containerd/s/9989cebda1fc5942fffebcc1adfdc2d893d1c438c8e0b4dc2f786561f7fc6542" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:03.541679 systemd[1]: Started cri-containerd-c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595.scope - libcontainer container c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595. Jun 20 18:25:03.614264 containerd[2019]: time="2025-06-20T18:25:03.614183960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c8955bc85-w9ldc,Uid:415c5ed5-43e0-4c45-8eb9-f767547027f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595\"" Jun 20 18:25:03.622238 containerd[2019]: time="2025-06-20T18:25:03.622183040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 18:25:04.197116 kubelet[3506]: I0620 18:25:04.196480 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:05.196780 systemd-networkd[1817]: vxlan.calico: Link UP Jun 20 18:25:05.196795 systemd-networkd[1817]: vxlan.calico: Gained carrier Jun 20 18:25:05.197387 (udev-worker)[4597]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:25:05.268698 systemd-networkd[1817]: calife2f74cc8ae: Gained IPv6LL Jun 20 18:25:05.279678 kubelet[3506]: I0620 18:25:05.279560 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:05.579109 containerd[2019]: time="2025-06-20T18:25:05.579059662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" id:\"a84632d89c032841bead0002367bff2b759b2607a2538898901b6f8a1d8dde9a\" pid:4868 exit_status:1 exited_at:{seconds:1750443905 nanos:578596354}" Jun 20 18:25:05.677165 containerd[2019]: time="2025-06-20T18:25:05.677079382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:05.679411 containerd[2019]: time="2025-06-20T18:25:05.679287466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 20 18:25:05.682044 containerd[2019]: time="2025-06-20T18:25:05.681977602Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:05.697044 containerd[2019]: time="2025-06-20T18:25:05.696414598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:05.710054 containerd[2019]: time="2025-06-20T18:25:05.709800310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 2.087552818s" Jun 20 18:25:05.710388 containerd[2019]: time="2025-06-20T18:25:05.710210782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 20 18:25:05.723421 containerd[2019]: time="2025-06-20T18:25:05.723304618Z" level=info msg="CreateContainer within sandbox \"c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 18:25:05.754385 containerd[2019]: time="2025-06-20T18:25:05.754082326Z" level=info msg="Container 057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:05.777175 containerd[2019]: time="2025-06-20T18:25:05.777009947Z" level=info msg="CreateContainer within sandbox \"c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0\"" Jun 20 18:25:05.780525 containerd[2019]: time="2025-06-20T18:25:05.778473779Z" level=info msg="StartContainer for \"057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0\"" Jun 20 18:25:05.790369 containerd[2019]: time="2025-06-20T18:25:05.790208627Z" level=info msg="connecting to shim 057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0" address="unix:///run/containerd/s/9989cebda1fc5942fffebcc1adfdc2d893d1c438c8e0b4dc2f786561f7fc6542" protocol=ttrpc version=3 Jun 20 18:25:05.850911 systemd[1]: 
Started cri-containerd-057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0.scope - libcontainer container 057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0. Jun 20 18:25:05.873138 containerd[2019]: time="2025-06-20T18:25:05.873075239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" id:\"e201f9a7b81f7c6df14538ff36a2642bfdb11f4b57138f2bbed8fe822b5a85d7\" pid:4894 exit_status:1 exited_at:{seconds:1750443905 nanos:872111027}" Jun 20 18:25:05.991311 containerd[2019]: time="2025-06-20T18:25:05.991250436Z" level=info msg="StartContainer for \"057ca5929b661e9dd9187b587f20a2536b8a9fb0f8eaa9623f33b4f0ef0fc9e0\" returns successfully" Jun 20 18:25:05.994069 containerd[2019]: time="2025-06-20T18:25:05.994005396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 18:25:07.043000 containerd[2019]: time="2025-06-20T18:25:07.042083037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599868b7d5-p9dk8,Uid:2c4451a6-cf44-4ddd-b939-50431d8471fd,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:25:07.252591 systemd-networkd[1817]: vxlan.calico: Gained IPv6LL Jun 20 18:25:07.350565 (udev-worker)[4853]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:25:07.362645 systemd-networkd[1817]: cali57745178ecf: Link UP Jun 20 18:25:07.369894 systemd-networkd[1817]: cali57745178ecf: Gained carrier Jun 20 18:25:07.426892 containerd[2019]: 2025-06-20 18:25:07.125 [INFO][4976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0 calico-apiserver-599868b7d5- calico-apiserver 2c4451a6-cf44-4ddd-b939-50431d8471fd 851 0 2025-06-20 18:24:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599868b7d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-135 calico-apiserver-599868b7d5-p9dk8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57745178ecf [] [] }} ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-" Jun 20 18:25:07.426892 containerd[2019]: 2025-06-20 18:25:07.125 [INFO][4976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.426892 containerd[2019]: 2025-06-20 18:25:07.189 [INFO][4987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" HandleID="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Workload="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.189 [INFO][4987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" 
HandleID="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Workload="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d39f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-135", "pod":"calico-apiserver-599868b7d5-p9dk8", "timestamp":"2025-06-20 18:25:07.189632086 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.189 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.190 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.190 [INFO][4987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.208 [INFO][4987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" host="ip-172-31-21-135" Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.219 [INFO][4987] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.231 [INFO][4987] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.243 [INFO][4987] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:07.429446 containerd[2019]: 2025-06-20 18:25:07.264 [INFO][4987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.264 [INFO][4987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" host="ip-172-31-21-135" Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.274 [INFO][4987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7 Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.305 [INFO][4987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" host="ip-172-31-21-135" Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.334 [INFO][4987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.130/26] block=192.168.34.128/26 handle="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" host="ip-172-31-21-135" Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.334 [INFO][4987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.130/26] handle="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" host="ip-172-31-21-135" Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.336 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:25:07.430451 containerd[2019]: 2025-06-20 18:25:07.336 [INFO][4987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.130/26] IPv6=[] ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" HandleID="k8s-pod-network.2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Workload="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.430788 containerd[2019]: 2025-06-20 18:25:07.340 [INFO][4976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0", GenerateName:"calico-apiserver-599868b7d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c4451a6-cf44-4ddd-b939-50431d8471fd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599868b7d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"calico-apiserver-599868b7d5-p9dk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57745178ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:07.430990 containerd[2019]: 2025-06-20 18:25:07.341 [INFO][4976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.130/32] ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.430990 containerd[2019]: 2025-06-20 18:25:07.342 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57745178ecf ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.430990 containerd[2019]: 2025-06-20 18:25:07.372 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.431210 containerd[2019]: 2025-06-20 18:25:07.373 [INFO][4976] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0", GenerateName:"calico-apiserver-599868b7d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c4451a6-cf44-4ddd-b939-50431d8471fd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599868b7d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7", Pod:"calico-apiserver-599868b7d5-p9dk8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57745178ecf", MAC:"82:e1:69:67:1e:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:07.431441 containerd[2019]: 2025-06-20 18:25:07.416 [INFO][4976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-p9dk8" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--p9dk8-eth0" Jun 20 18:25:07.552539 containerd[2019]: time="2025-06-20T18:25:07.552463031Z" level=info msg="connecting to shim 2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7" address="unix:///run/containerd/s/e93dbb227e6666dd82a83e00d8454877c868c0771fed2b55f7961af451d8106b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:07.667488 systemd[1]: Started cri-containerd-2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7.scope - libcontainer container 2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7. Jun 20 18:25:07.979363 containerd[2019]: time="2025-06-20T18:25:07.978871705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599868b7d5-p9dk8,Uid:2c4451a6-cf44-4ddd-b939-50431d8471fd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7\"" Jun 20 18:25:08.534452 systemd-networkd[1817]: cali57745178ecf: Gained IPv6LL Jun 20 18:25:08.548442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353477855.mount: Deactivated successfully. 
Jun 20 18:25:08.589413 containerd[2019]: time="2025-06-20T18:25:08.588775188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:08.591258 containerd[2019]: time="2025-06-20T18:25:08.591135996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 20 18:25:08.593890 containerd[2019]: time="2025-06-20T18:25:08.593807928Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:08.598296 containerd[2019]: time="2025-06-20T18:25:08.598187581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:08.599916 containerd[2019]: time="2025-06-20T18:25:08.599615053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 2.605543429s" Jun 20 18:25:08.599916 containerd[2019]: time="2025-06-20T18:25:08.599675449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 20 18:25:08.602754 containerd[2019]: time="2025-06-20T18:25:08.602472397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:25:08.608859 containerd[2019]: time="2025-06-20T18:25:08.608804533Z" level=info msg="CreateContainer within sandbox \"c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 18:25:08.628372 containerd[2019]: time="2025-06-20T18:25:08.626264233Z" level=info msg="Container 2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:08.651964 containerd[2019]: time="2025-06-20T18:25:08.651864061Z" level=info msg="CreateContainer within sandbox \"c0f8027898e23fc187a9228385a48d050c5c362f8c14d678cb99c10d72f09595\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97\"" Jun 20 18:25:08.653597 containerd[2019]: time="2025-06-20T18:25:08.653535841Z" level=info msg="StartContainer for \"2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97\"" Jun 20 18:25:08.658538 containerd[2019]: time="2025-06-20T18:25:08.658405657Z" level=info msg="connecting to shim 2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97" address="unix:///run/containerd/s/9989cebda1fc5942fffebcc1adfdc2d893d1c438c8e0b4dc2f786561f7fc6542" protocol=ttrpc version=3 Jun 20 18:25:08.699654 systemd[1]: Started cri-containerd-2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97.scope - libcontainer container 2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97. 
Jun 20 18:25:08.794986 containerd[2019]: time="2025-06-20T18:25:08.794719045Z" level=info msg="StartContainer for \"2861347d39036988d062313b154b6e1576c787e31df162906478cb58137a4f97\" returns successfully" Jun 20 18:25:09.043612 containerd[2019]: time="2025-06-20T18:25:09.042653651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4g8ft,Uid:b8b64f5a-c146-4322-982b-92c0687fe966,Namespace:calico-system,Attempt:0,}" Jun 20 18:25:09.043612 containerd[2019]: time="2025-06-20T18:25:09.042799343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-k4vsj,Uid:9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:25:09.043612 containerd[2019]: time="2025-06-20T18:25:09.043004735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bsdqh,Uid:e80ef963-bd5b-47f9-8662-9e335330ec91,Namespace:kube-system,Attempt:0,}" Jun 20 18:25:09.044181 containerd[2019]: time="2025-06-20T18:25:09.044128943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5d28l,Uid:71089506-2ab6-49c2-9e3f-bb88117d854f,Namespace:calico-system,Attempt:0,}" Jun 20 18:25:09.520409 systemd-networkd[1817]: cali9e3d7078c42: Link UP Jun 20 18:25:09.524288 systemd-networkd[1817]: cali9e3d7078c42: Gained carrier Jun 20 18:25:09.574902 kubelet[3506]: I0620 18:25:09.574166 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-c8955bc85-w9ldc" podStartSLOduration=2.589669412 podStartE2EDuration="7.574143157s" podCreationTimestamp="2025-06-20 18:25:02 +0000 UTC" firstStartedPulling="2025-06-20 18:25:03.617161376 +0000 UTC m=+46.915742478" lastFinishedPulling="2025-06-20 18:25:08.601635037 +0000 UTC m=+51.900216223" observedRunningTime="2025-06-20 18:25:09.461396509 +0000 UTC m=+52.759977623" watchObservedRunningTime="2025-06-20 18:25:09.574143157 +0000 UTC m=+52.872724259" Jun 20 18:25:09.582222 containerd[2019]: 2025-06-20 18:25:09.212 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0 calico-apiserver-65fd8789dd- calico-apiserver 9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5 850 0 2025-06-20 18:24:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65fd8789dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-135 calico-apiserver-65fd8789dd-k4vsj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e3d7078c42 [] [] }} ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-" Jun 20 18:25:09.582222 containerd[2019]: 2025-06-20 18:25:09.213 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.582222 containerd[2019]: 2025-06-20 18:25:09.372 [INFO][5137] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.372 [INFO][5137] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-135", "pod":"calico-apiserver-65fd8789dd-k4vsj", "timestamp":"2025-06-20 18:25:09.37203042 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.372 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.372 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.372 [INFO][5137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.409 [INFO][5137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" host="ip-172-31-21-135" Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.436 [INFO][5137] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.446 [INFO][5137] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.458 [INFO][5137] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.583083 containerd[2019]: 2025-06-20 18:25:09.467 [INFO][5137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 18:25:09.468 [INFO][5137] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" host="ip-172-31-21-135" Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 18:25:09.473 [INFO][5137] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 18:25:09.486 [INFO][5137] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" host="ip-172-31-21-135" Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 18:25:09.503 [INFO][5137] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.131/26] block=192.168.34.128/26 handle="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" host="ip-172-31-21-135" Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 
18:25:09.503 [INFO][5137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.131/26] handle="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" host="ip-172-31-21-135" Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 18:25:09.503 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:25:09.584193 containerd[2019]: 2025-06-20 18:25:09.503 [INFO][5137] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.131/26] IPv6=[] ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.585047 containerd[2019]: 2025-06-20 18:25:09.510 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0", GenerateName:"calico-apiserver-65fd8789dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd8789dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"calico-apiserver-65fd8789dd-k4vsj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3d7078c42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:09.585208 containerd[2019]: 2025-06-20 18:25:09.510 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.131/32] ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.585208 containerd[2019]: 2025-06-20 18:25:09.510 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e3d7078c42 ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.585208 containerd[2019]: 2025-06-20 18:25:09.526 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.585883 containerd[2019]: 2025-06-20 18:25:09.528 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0", GenerateName:"calico-apiserver-65fd8789dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd8789dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a", Pod:"calico-apiserver-65fd8789dd-k4vsj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3d7078c42", MAC:"0a:bb:3c:60:59:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:09.586143 containerd[2019]: 2025-06-20 18:25:09.576 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-k4vsj" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:09.682976 containerd[2019]: time="2025-06-20T18:25:09.682418390Z" level=info msg="connecting to shim 39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" address="unix:///run/containerd/s/cb77e093241493d4cdd3b0dd5e45b16bc08108496889f427900334a2652065af" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:09.738211 systemd-networkd[1817]: calid38b0c79fe6: Link UP Jun 20 18:25:09.740611 systemd-networkd[1817]: calid38b0c79fe6: Gained carrier Jun 20 18:25:09.786745 systemd[1]: Started cri-containerd-39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a.scope - libcontainer container 39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a. 
Jun 20 18:25:09.803190 containerd[2019]: 2025-06-20 18:25:09.267 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0 coredns-674b8bbfcf- kube-system e80ef963-bd5b-47f9-8662-9e335330ec91 846 0 2025-06-20 18:24:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-135 coredns-674b8bbfcf-bsdqh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid38b0c79fe6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-" Jun 20 18:25:09.803190 containerd[2019]: 2025-06-20 18:25:09.270 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.803190 containerd[2019]: 2025-06-20 18:25:09.394 [INFO][5147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" HandleID="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Workload="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.395 [INFO][5147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" HandleID="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Workload="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321450), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-135", "pod":"coredns-674b8bbfcf-bsdqh", "timestamp":"2025-06-20 18:25:09.394079664 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.396 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.503 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.504 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.566 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" host="ip-172-31-21-135" Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.583 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.601 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.609 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.803518 containerd[2019]: 2025-06-20 18:25:09.622 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.623 [INFO][5147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" host="ip-172-31-21-135" Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.632 [INFO][5147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.654 [INFO][5147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" host="ip-172-31-21-135" Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.681 [INFO][5147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.132/26] block=192.168.34.128/26 handle="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" host="ip-172-31-21-135" Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.682 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.132/26] handle="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" host="ip-172-31-21-135" Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.682 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:25:09.804003 containerd[2019]: 2025-06-20 18:25:09.683 [INFO][5147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.132/26] IPv6=[] ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" HandleID="k8s-pod-network.80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Workload="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.804322 containerd[2019]: 2025-06-20 18:25:09.709 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e80ef963-bd5b-47f9-8662-9e335330ec91", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"coredns-674b8bbfcf-bsdqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid38b0c79fe6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:09.804322 containerd[2019]: 2025-06-20 18:25:09.720 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.132/32] ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.804322 containerd[2019]: 2025-06-20 18:25:09.721 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid38b0c79fe6 ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.804322 containerd[2019]: 2025-06-20 18:25:09.742 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" 
WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.804322 containerd[2019]: 2025-06-20 18:25:09.742 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e80ef963-bd5b-47f9-8662-9e335330ec91", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af", Pod:"coredns-674b8bbfcf-bsdqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid38b0c79fe6", MAC:"66:e6:f2:58:54:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:09.804322 containerd[2019]: 2025-06-20 18:25:09.782 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" Namespace="kube-system" Pod="coredns-674b8bbfcf-bsdqh" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--bsdqh-eth0" Jun 20 18:25:09.878576 systemd-networkd[1817]: cali4029942277d: Link UP Jun 20 18:25:09.882411 systemd-networkd[1817]: cali4029942277d: Gained carrier Jun 20 18:25:09.910895 containerd[2019]: time="2025-06-20T18:25:09.910418427Z" level=info msg="connecting to shim 80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af" address="unix:///run/containerd/s/0d1de4bc9b038ce9bb964731a3571bd7dcf126066bb91bef234d1a42d466848c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.261 [INFO][5088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0 csi-node-driver- calico-system b8b64f5a-c146-4322-982b-92c0687fe966 716 0 2025-06-20 18:24:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-135 csi-node-driver-4g8ft eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4029942277d [] [] }} ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.265 [INFO][5088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.416 [INFO][5144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" HandleID="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Workload="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.417 [INFO][5144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" HandleID="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Workload="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000376a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-135", "pod":"csi-node-driver-4g8ft", "timestamp":"2025-06-20 18:25:09.416520553 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.417 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.684 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.684 [INFO][5144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.726 [INFO][5144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.750 [INFO][5144] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.784 [INFO][5144] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.798 [INFO][5144] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.808 [INFO][5144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.809 [INFO][5144] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.817 [INFO][5144] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34 Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.827 [INFO][5144] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.842 [INFO][5144] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.133/26] block=192.168.34.128/26 handle="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.843 [INFO][5144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.133/26] handle="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" host="ip-172-31-21-135" Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.843 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
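[Editor's note] In the coredns WorkloadEndpoint dumps above, the port values are printed by Go in hexadecimal (Port:0x35, Port:0x23c1). Decoded, they are the usual CoreDNS ports; this is plain arithmetic, nothing Calico-specific:

```go
package main

import "fmt"

func main() {
	// Port values as printed in the WorkloadEndpointPort dumps above.
	fmt.Println(0x35, 0x23c1) // 53 9153 -> dns/dns-tcp on 53, metrics on 9153
}
```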
Jun 20 18:25:09.955238 containerd[2019]: 2025-06-20 18:25:09.844 [INFO][5144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.133/26] IPv6=[] ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" HandleID="k8s-pod-network.28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Workload="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:09.956564 containerd[2019]: 2025-06-20 18:25:09.867 [INFO][5088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8b64f5a-c146-4322-982b-92c0687fe966", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"csi-node-driver-4g8ft", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4029942277d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:09.956564 containerd[2019]: 2025-06-20 18:25:09.868 [INFO][5088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.133/32] ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:09.956564 containerd[2019]: 2025-06-20 18:25:09.869 [INFO][5088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4029942277d ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:09.956564 containerd[2019]: 2025-06-20 18:25:09.887 [INFO][5088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:09.956564 containerd[2019]: 2025-06-20 18:25:09.896 [INFO][5088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" 
Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8b64f5a-c146-4322-982b-92c0687fe966", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34", Pod:"csi-node-driver-4g8ft", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4029942277d", MAC:"2a:5a:3b:17:a2:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:09.956564 containerd[2019]: 2025-06-20 18:25:09.939 [INFO][5088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" Namespace="calico-system" Pod="csi-node-driver-4g8ft" WorkloadEndpoint="ip--172--31--21--135-k8s-csi--node--driver--4g8ft-eth0" Jun 20 18:25:10.041510 containerd[2019]: time="2025-06-20T18:25:10.041149080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86696f5875-4m9rw,Uid:e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84,Namespace:calico-system,Attempt:0,}" Jun 20 18:25:10.047109 systemd[1]: Started cri-containerd-80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af.scope - libcontainer container 80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af. Jun 20 18:25:10.133776 systemd-networkd[1817]: cali4af9fff2f85: Link UP Jun 20 18:25:10.138883 systemd-networkd[1817]: cali4af9fff2f85: Gained carrier Jun 20 18:25:10.156235 containerd[2019]: time="2025-06-20T18:25:10.156177336Z" level=info msg="connecting to shim 28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34" address="unix:///run/containerd/s/84c3c38bfaabb913596eb455b93f8ea4bcf734a45bdf1fae9069738e3cd517b3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:10.253654 systemd[1]: Started cri-containerd-28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34.scope - libcontainer container 28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34. 
Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.276 [INFO][5110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0 goldmane-5bd85449d4- calico-system 71089506-2ab6-49c2-9e3f-bb88117d854f 852 0 2025-06-20 18:24:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-135 goldmane-5bd85449d4-5d28l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4af9fff2f85 [] [] }} ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.277 [INFO][5110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.430 [INFO][5148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" HandleID="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Workload="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.431 [INFO][5148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" HandleID="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Workload="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d420), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-135", "pod":"goldmane-5bd85449d4-5d28l", "timestamp":"2025-06-20 18:25:09.430916425 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.431 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.844 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.844 [INFO][5148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.910 [INFO][5148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.940 [INFO][5148] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.975 [INFO][5148] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:09.985 [INFO][5148] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.001 [INFO][5148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.001 [INFO][5148] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.007 [INFO][5148] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.026 [INFO][5148] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.075 [INFO][5148] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.134/26] block=192.168.34.128/26 handle="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.078 [INFO][5148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.134/26] handle="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" host="ip-172-31-21-135" Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.079 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
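[Editor's note] The long endpoint=&v3.WorkloadEndpoint{…} dumps are Go struct literals printed by the CNI plugin. Below is a trimmed local mirror of the spec fields that matter in these entries, populated with the csi-node-driver-4g8ft values visible above; this is a sketch of the shape seen in the log, not the real Calico API type.

```go
package main

import "fmt"

// workloadEndpointSpec mirrors the handful of v3.WorkloadEndpointSpec fields
// visible in the dumps above; the real type lives in the Calico API packages.
type workloadEndpointSpec struct {
	Orchestrator  string
	Node          string
	ContainerID   string
	Pod           string
	Endpoint      string
	IPNetworks    []string
	Profiles      []string
	InterfaceName string
	MAC           string
}

func main() {
	// Values copied from the csi-node-driver-4g8ft entries above.
	ep := workloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "ip-172-31-21-135",
		ContainerID:   "28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34",
		Pod:           "csi-node-driver-4g8ft",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.34.133/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.csi-node-driver"},
		InterfaceName: "cali4029942277d",
		MAC:           "2a:5a:3b:17:a2:b6",
	}
	fmt.Printf("%+v\n", ep)
}
```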
Jun 20 18:25:10.286700 containerd[2019]: 2025-06-20 18:25:10.084 [INFO][5148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.134/26] IPv6=[] ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" HandleID="k8s-pod-network.e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Workload="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.289536 containerd[2019]: 2025-06-20 18:25:10.108 [INFO][5110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"71089506-2ab6-49c2-9e3f-bb88117d854f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"goldmane-5bd85449d4-5d28l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4af9fff2f85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:10.289536 containerd[2019]: 2025-06-20 18:25:10.110 [INFO][5110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.134/32] ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.289536 containerd[2019]: 2025-06-20 18:25:10.110 [INFO][5110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4af9fff2f85 ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.289536 containerd[2019]: 2025-06-20 18:25:10.151 [INFO][5110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.289536 containerd[2019]: 2025-06-20 18:25:10.161 [INFO][5110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" 
WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"71089506-2ab6-49c2-9e3f-bb88117d854f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc", Pod:"goldmane-5bd85449d4-5d28l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4af9fff2f85", MAC:"32:58:ea:10:e6:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:10.289536 containerd[2019]: 2025-06-20 18:25:10.269 [INFO][5110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" Namespace="calico-system" Pod="goldmane-5bd85449d4-5d28l" WorkloadEndpoint="ip--172--31--21--135-k8s-goldmane--5bd85449d4--5d28l-eth0" Jun 20 18:25:10.435482 containerd[2019]: time="2025-06-20T18:25:10.434577938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bsdqh,Uid:e80ef963-bd5b-47f9-8662-9e335330ec91,Namespace:kube-system,Attempt:0,} returns sandbox id \"80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af\"" Jun 20 18:25:10.444723 containerd[2019]: time="2025-06-20T18:25:10.444638258Z" level=info msg="connecting to shim e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc" address="unix:///run/containerd/s/5be31935d9c131c748e0bec78a5ac8d6ae7b518953bbe842135fe13d1bedd084" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:10.462454 containerd[2019]: time="2025-06-20T18:25:10.462316922Z" level=info msg="CreateContainer within sandbox \"80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:25:10.507870 containerd[2019]: time="2025-06-20T18:25:10.507788522Z" level=info msg="Container 19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:10.546495 containerd[2019]: time="2025-06-20T18:25:10.544549070Z" level=info msg="CreateContainer within sandbox \"80d5f9478ba9bdc67469097404f941c4db71906eb2a9d0d1aa9b69de2c4f43af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe\"" Jun 20 18:25:10.549645 containerd[2019]: time="2025-06-20T18:25:10.549584186Z" level=info msg="StartContainer for 
\"19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe\"" Jun 20 18:25:10.567374 containerd[2019]: time="2025-06-20T18:25:10.566213774Z" level=info msg="connecting to shim 19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe" address="unix:///run/containerd/s/0d1de4bc9b038ce9bb964731a3571bd7dcf126066bb91bef234d1a42d466848c" protocol=ttrpc version=3 Jun 20 18:25:10.642842 systemd[1]: Started cri-containerd-e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc.scope - libcontainer container e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc. Jun 20 18:25:10.663631 systemd[1]: Started cri-containerd-19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe.scope - libcontainer container 19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe. Jun 20 18:25:10.708828 systemd-networkd[1817]: cali9e3d7078c42: Gained IPv6LL Jun 20 18:25:10.841275 containerd[2019]: time="2025-06-20T18:25:10.841106644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-k4vsj,Uid:9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\"" Jun 20 18:25:10.916707 containerd[2019]: time="2025-06-20T18:25:10.916059712Z" level=info msg="StartContainer for \"19f0c07802dd97d462fe4e6b27c97fd11f6ae162d921154ebce0d6a7be0321fe\" returns successfully" Jun 20 18:25:10.964053 containerd[2019]: time="2025-06-20T18:25:10.963609964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4g8ft,Uid:b8b64f5a-c146-4322-982b-92c0687fe966,Namespace:calico-system,Attempt:0,} returns sandbox id \"28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34\"" Jun 20 18:25:11.046549 containerd[2019]: time="2025-06-20T18:25:11.045622825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqdqt,Uid:d73292d8-ab80-4249-9e4c-886ba65bc635,Namespace:kube-system,Attempt:0,}" Jun 20 18:25:11.060412 containerd[2019]: time="2025-06-20T18:25:11.059470861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-9nwnw,Uid:e81c6d23-91be-4c5a-a154-8ce97a0e9ef7,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:25:11.272287 systemd-networkd[1817]: cali59eab8d0847: Link UP Jun 20 18:25:11.283216 systemd-networkd[1817]: cali59eab8d0847: Gained carrier Jun 20 18:25:11.285769 systemd-networkd[1817]: calid38b0c79fe6: Gained IPv6LL Jun 20 18:25:11.286241 systemd-networkd[1817]: cali4af9fff2f85: Gained IPv6LL Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.493 [INFO][5266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0 calico-kube-controllers-86696f5875- calico-system e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84 848 0 2025-06-20 18:24:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86696f5875 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-135 calico-kube-controllers-86696f5875-4m9rw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali59eab8d0847 [] [] }} ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" 
WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.495 [INFO][5266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.952 [INFO][5354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" HandleID="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Workload="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.964 [INFO][5354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" HandleID="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Workload="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-135", "pod":"calico-kube-controllers-86696f5875-4m9rw", "timestamp":"2025-06-20 18:25:10.952127944 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.965 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.965 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:10.965 [INFO][5354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.009 [INFO][5354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.031 [INFO][5354] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.087 [INFO][5354] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.095 [INFO][5354] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.116 [INFO][5354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.146 [INFO][5354] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.160 [INFO][5354] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1 Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.187 [INFO][5354] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.218 [INFO][5354] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.135/26] block=192.168.34.128/26 handle="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.219 [INFO][5354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.135/26] handle="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" host="ip-172-31-21-135" Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.220 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:25:11.355328 containerd[2019]: 2025-06-20 18:25:11.220 [INFO][5354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.135/26] IPv6=[] ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" HandleID="k8s-pod-network.6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Workload="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.358532 containerd[2019]: 2025-06-20 18:25:11.237 [INFO][5266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0", GenerateName:"calico-kube-controllers-86696f5875-", Namespace:"calico-system", SelfLink:"", UID:"e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86696f5875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"calico-kube-controllers-86696f5875-4m9rw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali59eab8d0847", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:11.358532 containerd[2019]: 2025-06-20 18:25:11.237 [INFO][5266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.135/32] ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.358532 containerd[2019]: 2025-06-20 18:25:11.237 [INFO][5266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59eab8d0847 ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.358532 containerd[2019]: 2025-06-20 18:25:11.290 [INFO][5266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.358532 containerd[2019]: 
2025-06-20 18:25:11.293 [INFO][5266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0", GenerateName:"calico-kube-controllers-86696f5875-", Namespace:"calico-system", SelfLink:"", UID:"e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86696f5875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1", Pod:"calico-kube-controllers-86696f5875-4m9rw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali59eab8d0847", MAC:"ce:2d:a4:4e:e4:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:11.358532 containerd[2019]: 2025-06-20 18:25:11.342 [INFO][5266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" Namespace="calico-system" Pod="calico-kube-controllers-86696f5875-4m9rw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--kube--controllers--86696f5875--4m9rw-eth0" Jun 20 18:25:11.595614 containerd[2019]: time="2025-06-20T18:25:11.594216843Z" level=info msg="connecting to shim 6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1" address="unix:///run/containerd/s/ff01f7834029f56815ae0da38c9a4b3d255ec9055c4e3c45e0271d104cfbe622" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:11.609663 containerd[2019]: time="2025-06-20T18:25:11.609584535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-5d28l,Uid:71089506-2ab6-49c2-9e3f-bb88117d854f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc\"" Jun 20 18:25:11.822794 systemd[1]: Started cri-containerd-6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1.scope - libcontainer container 6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1. 
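[Editor's note] The systemd-networkd entries (caliXXXX: Link UP, Gained carrier, and later Gained IPv6LL) track the host-side veth of each endpoint coming up. On the node, the same state can be read from sysfs; a small sketch, assuming the interface name cali4029942277d from the entries above:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Host-side veth name taken from the systemd-networkd entries above.
	const iface = "cali4029942277d"

	for _, attr := range []string{"operstate", "carrier"} {
		b, err := os.ReadFile("/sys/class/net/" + iface + "/" + attr)
		if err != nil {
			fmt.Println(attr, "unavailable:", err)
			continue
		}
		// e.g. "operstate: up", "carrier: 1" once the link has gained carrier.
		fmt.Println(attr+":", strings.TrimSpace(string(b)))
	}
}
```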
Jun 20 18:25:11.860916 systemd-networkd[1817]: cali4029942277d: Gained IPv6LL Jun 20 18:25:12.026277 systemd-networkd[1817]: cali8aecf387b9a: Link UP Jun 20 18:25:12.033838 systemd-networkd[1817]: cali8aecf387b9a: Gained carrier Jun 20 18:25:12.083245 kubelet[3506]: I0620 18:25:12.083147 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bsdqh" podStartSLOduration=51.08312045 podStartE2EDuration="51.08312045s" podCreationTimestamp="2025-06-20 18:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:25:11.545823783 +0000 UTC m=+54.844404909" watchObservedRunningTime="2025-06-20 18:25:12.08312045 +0000 UTC m=+55.381701552" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.436 [INFO][5433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0 calico-apiserver-65fd8789dd- calico-apiserver e81c6d23-91be-4c5a-a154-8ce97a0e9ef7 847 0 2025-06-20 18:24:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65fd8789dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-135 calico-apiserver-65fd8789dd-9nwnw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8aecf387b9a [] [] }} ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.439 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.659 [INFO][5487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.660 [INFO][5487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012d950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-135", "pod":"calico-apiserver-65fd8789dd-9nwnw", "timestamp":"2025-06-20 18:25:11.65990962 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.661 [INFO][5487] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.661 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.661 [INFO][5487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.738 [INFO][5487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.777 [INFO][5487] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.821 [INFO][5487] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.835 [INFO][5487] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.851 [INFO][5487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.852 [INFO][5487] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.863 [INFO][5487] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.935 [INFO][5487] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.982 [INFO][5487] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.136/26] block=192.168.34.128/26 handle="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.982 [INFO][5487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.136/26] handle="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" host="ip-172-31-21-135" Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.983 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
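[Editor's note] The kubelet pod_startup_latency_tracker entry a few lines above reports podStartSLOduration=51.08312045s for coredns-674b8bbfcf-bsdqh; that is simply the gap between the pod's creation timestamp and the watch-observed running time, both printed in the same entry. A quick check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-06-20 18:24:21 +0000 UTC")
	running, _ := time.Parse(layout, "2025-06-20 18:25:12.08312045 +0000 UTC")

	fmt.Println(running.Sub(created)) // 51.08312045s, matching podStartSLOduration
}
```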
Jun 20 18:25:12.108412 containerd[2019]: 2025-06-20 18:25:11.983 [INFO][5487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.136/26] IPv6=[] ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.115687 containerd[2019]: 2025-06-20 18:25:11.997 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0", GenerateName:"calico-apiserver-65fd8789dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd8789dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"calico-apiserver-65fd8789dd-9nwnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aecf387b9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:12.115687 containerd[2019]: 2025-06-20 18:25:11.997 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.136/32] ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.115687 containerd[2019]: 2025-06-20 18:25:11.998 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8aecf387b9a ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.115687 containerd[2019]: 2025-06-20 18:25:12.034 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.115687 containerd[2019]: 2025-06-20 18:25:12.040 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0", GenerateName:"calico-apiserver-65fd8789dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fd8789dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf", Pod:"calico-apiserver-65fd8789dd-9nwnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aecf387b9a", MAC:"b6:82:f3:8d:98:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:12.115687 containerd[2019]: 2025-06-20 18:25:12.085 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Namespace="calico-apiserver" Pod="calico-apiserver-65fd8789dd-9nwnw" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:12.248947 containerd[2019]: time="2025-06-20T18:25:12.248851443Z" level=info msg="connecting to shim edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" address="unix:///run/containerd/s/44040e00445062496e6433f1ce29a5625edcb877fb8b4ed1089fa84e53ac3d4b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:12.307630 systemd-networkd[1817]: cali7ac060c66f3: Link UP Jun 20 18:25:12.316629 systemd-networkd[1817]: cali7ac060c66f3: Gained carrier Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.391 [INFO][5443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0 coredns-674b8bbfcf- kube-system d73292d8-ab80-4249-9e4c-886ba65bc635 845 0 2025-06-20 18:24:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-135 coredns-674b8bbfcf-xqdqt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ac060c66f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.392 [INFO][5443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.855 [INFO][5476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" HandleID="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Workload="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.858 [INFO][5476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" HandleID="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Workload="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d680), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-135", "pod":"coredns-674b8bbfcf-xqdqt", "timestamp":"2025-06-20 18:25:11.855620609 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.859 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.983 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:11.983 [INFO][5476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.044 [INFO][5476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.090 [INFO][5476] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.127 [INFO][5476] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.153 [INFO][5476] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.178 [INFO][5476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.178 [INFO][5476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.183 [INFO][5476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719 Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.218 [INFO][5476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.245 [INFO][5476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.137/26] block=192.168.34.128/26 handle="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.246 [INFO][5476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.137/26] handle="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" host="ip-172-31-21-135" Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.246 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:25:12.408399 containerd[2019]: 2025-06-20 18:25:12.246 [INFO][5476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.137/26] IPv6=[] ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" HandleID="k8s-pod-network.d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Workload="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.412751 containerd[2019]: 2025-06-20 18:25:12.279 [INFO][5443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d73292d8-ab80-4249-9e4c-886ba65bc635", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"coredns-674b8bbfcf-xqdqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ac060c66f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:12.412751 containerd[2019]: 2025-06-20 18:25:12.281 [INFO][5443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.137/32] ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.412751 containerd[2019]: 2025-06-20 18:25:12.282 [INFO][5443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ac060c66f3 ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.412751 containerd[2019]: 2025-06-20 18:25:12.321 [INFO][5443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" 
WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.412751 containerd[2019]: 2025-06-20 18:25:12.325 [INFO][5443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d73292d8-ab80-4249-9e4c-886ba65bc635", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 24, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719", Pod:"coredns-674b8bbfcf-xqdqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ac060c66f3", MAC:"02:6a:6d:96:7c:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:12.412751 containerd[2019]: 2025-06-20 18:25:12.375 [INFO][5443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqdqt" WorkloadEndpoint="ip--172--31--21--135-k8s-coredns--674b8bbfcf--xqdqt-eth0" Jun 20 18:25:12.436180 systemd[1]: Started cri-containerd-edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf.scope - libcontainer container edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf. 
Jun 20 18:25:12.514518 containerd[2019]: time="2025-06-20T18:25:12.513496192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86696f5875-4m9rw,Uid:e2c00cb4-bcb0-4fe2-b1e0-1274cd455b84,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1\"" Jun 20 18:25:12.586439 containerd[2019]: time="2025-06-20T18:25:12.586064248Z" level=info msg="connecting to shim d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719" address="unix:///run/containerd/s/473d362cc51403138607e0d56cd1295817931b953a43607643198e03d77ad90b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:12.701705 systemd[1]: Started cri-containerd-d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719.scope - libcontainer container d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719. Jun 20 18:25:12.919288 containerd[2019]: time="2025-06-20T18:25:12.918518382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqdqt,Uid:d73292d8-ab80-4249-9e4c-886ba65bc635,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719\"" Jun 20 18:25:12.938802 containerd[2019]: time="2025-06-20T18:25:12.938654070Z" level=info msg="CreateContainer within sandbox \"d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:25:13.003408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224521357.mount: Deactivated successfully. Jun 20 18:25:13.005518 containerd[2019]: time="2025-06-20T18:25:13.005240582Z" level=info msg="Container 278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:13.030785 containerd[2019]: time="2025-06-20T18:25:13.030710979Z" level=info msg="CreateContainer within sandbox \"d6dc17521d9344c4061e7880521a3c711e160a28e8772e2bb8efc3e6f2048719\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652\"" Jun 20 18:25:13.034842 containerd[2019]: time="2025-06-20T18:25:13.034764759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fd8789dd-9nwnw,Uid:e81c6d23-91be-4c5a-a154-8ce97a0e9ef7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\"" Jun 20 18:25:13.035431 containerd[2019]: time="2025-06-20T18:25:13.035154423Z" level=info msg="StartContainer for \"278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652\"" Jun 20 18:25:13.042674 containerd[2019]: time="2025-06-20T18:25:13.042618075Z" level=info msg="connecting to shim 278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652" address="unix:///run/containerd/s/473d362cc51403138607e0d56cd1295817931b953a43607643198e03d77ad90b" protocol=ttrpc version=3 Jun 20 18:25:13.106893 systemd[1]: Started cri-containerd-278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652.scope - libcontainer container 278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652. 
Jun 20 18:25:13.211599 containerd[2019]: time="2025-06-20T18:25:13.211481511Z" level=info msg="StartContainer for \"278d18c29280bf7a18f82e1a18487de3fbc9d47ff961e0de7de3553622a87652\" returns successfully" Jun 20 18:25:13.269879 systemd-networkd[1817]: cali59eab8d0847: Gained IPv6LL Jun 20 18:25:13.668203 kubelet[3506]: I0620 18:25:13.667927 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xqdqt" podStartSLOduration=52.667901502 podStartE2EDuration="52.667901502s" podCreationTimestamp="2025-06-20 18:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:25:13.622732529 +0000 UTC m=+56.921313643" watchObservedRunningTime="2025-06-20 18:25:13.667901502 +0000 UTC m=+56.966482604" Jun 20 18:25:13.908586 systemd-networkd[1817]: cali8aecf387b9a: Gained IPv6LL Jun 20 18:25:13.973552 systemd-networkd[1817]: cali7ac060c66f3: Gained IPv6LL Jun 20 18:25:14.174842 containerd[2019]: time="2025-06-20T18:25:14.174760156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:14.177745 containerd[2019]: time="2025-06-20T18:25:14.177621712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 20 18:25:14.180629 containerd[2019]: time="2025-06-20T18:25:14.180559492Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:14.186164 containerd[2019]: time="2025-06-20T18:25:14.186091564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:14.189539 containerd[2019]: time="2025-06-20T18:25:14.189465244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 5.586931527s" Jun 20 18:25:14.189539 containerd[2019]: time="2025-06-20T18:25:14.189531988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:25:14.192676 containerd[2019]: time="2025-06-20T18:25:14.192604912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:25:14.202439 containerd[2019]: time="2025-06-20T18:25:14.202322428Z" level=info msg="CreateContainer within sandbox \"2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:25:14.221209 containerd[2019]: time="2025-06-20T18:25:14.220862404Z" level=info msg="Container 77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:14.250780 containerd[2019]: time="2025-06-20T18:25:14.250585925Z" level=info msg="CreateContainer within sandbox \"2fe9a3753c8f31f55b9448cbc2b432f1344cabdaefd58313b7b378698ccee7a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93\"" Jun 20 18:25:14.252829 containerd[2019]: time="2025-06-20T18:25:14.252748901Z" level=info msg="StartContainer for \"77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93\"" Jun 20 18:25:14.259828 containerd[2019]: time="2025-06-20T18:25:14.259695857Z" level=info msg="connecting to shim 77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93" address="unix:///run/containerd/s/e93dbb227e6666dd82a83e00d8454877c868c0771fed2b55f7961af451d8106b" protocol=ttrpc version=3 Jun 20 18:25:14.313920 systemd[1]: Started cri-containerd-77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93.scope - libcontainer container 77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93. Jun 20 18:25:14.496915 containerd[2019]: time="2025-06-20T18:25:14.496127802Z" level=info msg="StartContainer for \"77d70c35bd902dd3e8de9db8ce3cb23a03673d6bc62186b895978f08742cea93\" returns successfully" Jun 20 18:25:14.539511 containerd[2019]: time="2025-06-20T18:25:14.539421726Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:14.541675 containerd[2019]: time="2025-06-20T18:25:14.541601298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 18:25:14.547898 containerd[2019]: time="2025-06-20T18:25:14.547800978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 355.13429ms" Jun 20 18:25:14.547898 containerd[2019]: time="2025-06-20T18:25:14.547890426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:25:14.550303 containerd[2019]: time="2025-06-20T18:25:14.550239798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 18:25:14.561203 containerd[2019]: time="2025-06-20T18:25:14.561115170Z" level=info msg="CreateContainer within sandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:25:14.583367 containerd[2019]: time="2025-06-20T18:25:14.582161970Z" level=info msg="Container 923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:14.617493 containerd[2019]: time="2025-06-20T18:25:14.617276034Z" level=info msg="CreateContainer within sandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\"" Jun 20 18:25:14.620674 containerd[2019]: time="2025-06-20T18:25:14.620617638Z" level=info msg="StartContainer for \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\"" Jun 20 18:25:14.626233 containerd[2019]: time="2025-06-20T18:25:14.626164698Z" level=info msg="connecting to shim 923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87" address="unix:///run/containerd/s/cb77e093241493d4cdd3b0dd5e45b16bc08108496889f427900334a2652065af" 
protocol=ttrpc version=3 Jun 20 18:25:14.713945 systemd[1]: Started cri-containerd-923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87.scope - libcontainer container 923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87. Jun 20 18:25:14.821840 containerd[2019]: time="2025-06-20T18:25:14.821590255Z" level=info msg="StartContainer for \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" returns successfully" Jun 20 18:25:15.618919 kubelet[3506]: I0620 18:25:15.617993 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:15.651405 kubelet[3506]: I0620 18:25:15.649496 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-599868b7d5-p9dk8" podStartSLOduration=31.446649541 podStartE2EDuration="37.649469648s" podCreationTimestamp="2025-06-20 18:24:38 +0000 UTC" firstStartedPulling="2025-06-20 18:25:07.988600489 +0000 UTC m=+51.287181603" lastFinishedPulling="2025-06-20 18:25:14.19142062 +0000 UTC m=+57.490001710" observedRunningTime="2025-06-20 18:25:14.649221139 +0000 UTC m=+57.947802265" watchObservedRunningTime="2025-06-20 18:25:15.649469648 +0000 UTC m=+58.948050786" Jun 20 18:25:15.651405 kubelet[3506]: I0620 18:25:15.649802 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65fd8789dd-k4vsj" podStartSLOduration=36.950257662 podStartE2EDuration="40.649787984s" podCreationTimestamp="2025-06-20 18:24:35 +0000 UTC" firstStartedPulling="2025-06-20 18:25:10.850260376 +0000 UTC m=+54.148841478" lastFinishedPulling="2025-06-20 18:25:14.549790698 +0000 UTC m=+57.848371800" observedRunningTime="2025-06-20 18:25:15.643895456 +0000 UTC m=+58.942476570" watchObservedRunningTime="2025-06-20 18:25:15.649787984 +0000 UTC m=+58.948369110" Jun 20 18:25:16.065133 containerd[2019]: time="2025-06-20T18:25:16.065076594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:16.072369 containerd[2019]: time="2025-06-20T18:25:16.069417018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 20 18:25:16.073023 containerd[2019]: time="2025-06-20T18:25:16.072942090Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:16.080921 containerd[2019]: time="2025-06-20T18:25:16.080839794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:16.087162 containerd[2019]: time="2025-06-20T18:25:16.087068286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 1.536756488s" Jun 20 18:25:16.087162 containerd[2019]: time="2025-06-20T18:25:16.087155922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 20 18:25:16.094436 containerd[2019]: time="2025-06-20T18:25:16.094048362Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 18:25:16.103561 containerd[2019]: time="2025-06-20T18:25:16.103479582Z" level=info msg="CreateContainer within sandbox \"28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 18:25:16.133266 containerd[2019]: time="2025-06-20T18:25:16.132817290Z" level=info msg="Container 9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:16.161774 containerd[2019]: time="2025-06-20T18:25:16.161697786Z" level=info msg="CreateContainer within sandbox \"28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528\"" Jun 20 18:25:16.165451 containerd[2019]: time="2025-06-20T18:25:16.163466478Z" level=info msg="StartContainer for \"9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528\"" Jun 20 18:25:16.171821 containerd[2019]: time="2025-06-20T18:25:16.171300858Z" level=info msg="connecting to shim 9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528" address="unix:///run/containerd/s/84c3c38bfaabb913596eb455b93f8ea4bcf734a45bdf1fae9069738e3cd517b3" protocol=ttrpc version=3 Jun 20 18:25:16.244581 systemd[1]: Started cri-containerd-9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528.scope - libcontainer container 9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528. Jun 20 18:25:16.368011 ntpd[1981]: Listen normally on 8 vxlan.calico 192.168.34.128:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 8 vxlan.calico 192.168.34.128:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 9 calife2f74cc8ae [fe80::ecee:eeff:feee:eeee%4]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 10 vxlan.calico [fe80::64ff:e7ff:fe19:9ee0%5]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 11 cali57745178ecf [fe80::ecee:eeff:feee:eeee%8]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 12 cali9e3d7078c42 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 13 calid38b0c79fe6 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 14 cali4029942277d [fe80::ecee:eeff:feee:eeee%11]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 15 cali4af9fff2f85 [fe80::ecee:eeff:feee:eeee%12]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 16 cali59eab8d0847 [fe80::ecee:eeff:feee:eeee%13]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 17 cali8aecf387b9a [fe80::ecee:eeff:feee:eeee%14]:123 Jun 20 18:25:16.370111 ntpd[1981]: 20 Jun 18:25:16 ntpd[1981]: Listen normally on 18 cali7ac060c66f3 [fe80::ecee:eeff:feee:eeee%15]:123 Jun 20 18:25:16.368272 ntpd[1981]: Listen normally on 9 calife2f74cc8ae [fe80::ecee:eeff:feee:eeee%4]:123 Jun 20 18:25:16.368568 ntpd[1981]: Listen normally on 10 vxlan.calico [fe80::64ff:e7ff:fe19:9ee0%5]:123 Jun 20 18:25:16.368658 ntpd[1981]: Listen normally on 11 cali57745178ecf [fe80::ecee:eeff:feee:eeee%8]:123 Jun 20 18:25:16.368738 ntpd[1981]: Listen normally on 12 cali9e3d7078c42 
[fe80::ecee:eeff:feee:eeee%9]:123 Jun 20 18:25:16.368809 ntpd[1981]: Listen normally on 13 calid38b0c79fe6 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 20 18:25:16.368874 ntpd[1981]: Listen normally on 14 cali4029942277d [fe80::ecee:eeff:feee:eeee%11]:123 Jun 20 18:25:16.368941 ntpd[1981]: Listen normally on 15 cali4af9fff2f85 [fe80::ecee:eeff:feee:eeee%12]:123 Jun 20 18:25:16.369004 ntpd[1981]: Listen normally on 16 cali59eab8d0847 [fe80::ecee:eeff:feee:eeee%13]:123 Jun 20 18:25:16.369095 ntpd[1981]: Listen normally on 17 cali8aecf387b9a [fe80::ecee:eeff:feee:eeee%14]:123 Jun 20 18:25:16.369163 ntpd[1981]: Listen normally on 18 cali7ac060c66f3 [fe80::ecee:eeff:feee:eeee%15]:123 Jun 20 18:25:16.495704 containerd[2019]: time="2025-06-20T18:25:16.495219620Z" level=info msg="StartContainer for \"9b6d0dfb9f347df7d078f2dd3db8530517e0edd05a09f660d9495bfc01dcc528\" returns successfully" Jun 20 18:25:17.636724 kubelet[3506]: I0620 18:25:17.635885 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:20.047170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123105786.mount: Deactivated successfully. Jun 20 18:25:21.241544 containerd[2019]: time="2025-06-20T18:25:21.241474187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:21.246173 containerd[2019]: time="2025-06-20T18:25:21.246032627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718" Jun 20 18:25:21.249126 containerd[2019]: time="2025-06-20T18:25:21.249017867Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:21.259589 containerd[2019]: time="2025-06-20T18:25:21.259286051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:21.264074 containerd[2019]: time="2025-06-20T18:25:21.263982467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 5.169342277s" Jun 20 18:25:21.264074 containerd[2019]: time="2025-06-20T18:25:21.264045143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\"" Jun 20 18:25:21.268730 containerd[2019]: time="2025-06-20T18:25:21.268666655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 18:25:21.281332 containerd[2019]: time="2025-06-20T18:25:21.281265624Z" level=info msg="CreateContainer within sandbox \"e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 18:25:21.313927 containerd[2019]: time="2025-06-20T18:25:21.313845012Z" level=info msg="Container 6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:21.354888 containerd[2019]: time="2025-06-20T18:25:21.354823560Z" level=info msg="CreateContainer 
within sandbox \"e1b0f97d4596039a3b5ffbc15fabdeea4c79722282a918d01b1efe5ec10920bc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\"" Jun 20 18:25:21.356499 containerd[2019]: time="2025-06-20T18:25:21.356423892Z" level=info msg="StartContainer for \"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\"" Jun 20 18:25:21.360669 containerd[2019]: time="2025-06-20T18:25:21.360575952Z" level=info msg="connecting to shim 6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09" address="unix:///run/containerd/s/5be31935d9c131c748e0bec78a5ac8d6ae7b518953bbe842135fe13d1bedd084" protocol=ttrpc version=3 Jun 20 18:25:21.419696 systemd[1]: Started cri-containerd-6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09.scope - libcontainer container 6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09. Jun 20 18:25:21.786890 containerd[2019]: time="2025-06-20T18:25:21.786812810Z" level=info msg="StartContainer for \"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" returns successfully" Jun 20 18:25:22.406716 systemd[1]: Started sshd@9-172.31.21.135:22-139.178.68.195:46506.service - OpenSSH per-connection server daemon (139.178.68.195:46506). Jun 20 18:25:22.486585 kubelet[3506]: I0620 18:25:22.482192 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:22.685469 sshd[5874]: Accepted publickey for core from 139.178.68.195 port 46506 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:22.694668 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:22.724755 systemd-logind[1989]: New session 10 of user core. Jun 20 18:25:22.731656 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:25:22.930738 kubelet[3506]: I0620 18:25:22.930090 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-5d28l" podStartSLOduration=28.280225157 podStartE2EDuration="37.930065848s" podCreationTimestamp="2025-06-20 18:24:45 +0000 UTC" firstStartedPulling="2025-06-20 18:25:11.617164732 +0000 UTC m=+54.915745834" lastFinishedPulling="2025-06-20 18:25:21.267005435 +0000 UTC m=+64.565586525" observedRunningTime="2025-06-20 18:25:22.786501639 +0000 UTC m=+66.085082837" watchObservedRunningTime="2025-06-20 18:25:22.930065848 +0000 UTC m=+66.228646938" Jun 20 18:25:23.090993 systemd[1]: Created slice kubepods-besteffort-pod13165b96_4ed5_4c1a_ace8_5eca5421face.slice - libcontainer container kubepods-besteffort-pod13165b96_4ed5_4c1a_ace8_5eca5421face.slice. 
Jun 20 18:25:23.168171 kubelet[3506]: I0620 18:25:23.167847 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z72g\" (UniqueName: \"kubernetes.io/projected/13165b96-4ed5-4c1a-ace8-5eca5421face-kube-api-access-2z72g\") pod \"calico-apiserver-599868b7d5-7znmz\" (UID: \"13165b96-4ed5-4c1a-ace8-5eca5421face\") " pod="calico-apiserver/calico-apiserver-599868b7d5-7znmz" Jun 20 18:25:23.168443 kubelet[3506]: I0620 18:25:23.168312 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13165b96-4ed5-4c1a-ace8-5eca5421face-calico-apiserver-certs\") pod \"calico-apiserver-599868b7d5-7znmz\" (UID: \"13165b96-4ed5-4c1a-ace8-5eca5421face\") " pod="calico-apiserver/calico-apiserver-599868b7d5-7znmz" Jun 20 18:25:23.297437 sshd[5878]: Connection closed by 139.178.68.195 port 46506 Jun 20 18:25:23.301629 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:23.325039 systemd[1]: sshd@9-172.31.21.135:22-139.178.68.195:46506.service: Deactivated successfully. Jun 20 18:25:23.356321 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:25:23.372376 systemd-logind[1989]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:25:23.379067 systemd-logind[1989]: Removed session 10. Jun 20 18:25:23.419673 containerd[2019]: time="2025-06-20T18:25:23.419606402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599868b7d5-7znmz,Uid:13165b96-4ed5-4c1a-ace8-5eca5421face,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:25:23.532765 containerd[2019]: time="2025-06-20T18:25:23.532440543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"299758142a5f5a97ba7bc94cd4f604a537ec248af6de2137691cc3dbee80a009\" pid:5900 exit_status:1 exited_at:{seconds:1750443923 nanos:530898015}" Jun 20 18:25:23.810053 systemd-networkd[1817]: cali799ef61c771: Link UP Jun 20 18:25:23.814865 systemd-networkd[1817]: cali799ef61c771: Gained carrier Jun 20 18:25:23.816680 (udev-worker)[5966]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.570 [INFO][5925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0 calico-apiserver-599868b7d5- calico-apiserver 13165b96-4ed5-4c1a-ace8-5eca5421face 1127 0 2025-06-20 18:25:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599868b7d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-135 calico-apiserver-599868b7d5-7znmz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali799ef61c771 [] [] }} ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.571 [INFO][5925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.642 [INFO][5939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" HandleID="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Workload="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.642 [INFO][5939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" HandleID="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Workload="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-135", "pod":"calico-apiserver-599868b7d5-7znmz", "timestamp":"2025-06-20 18:25:23.642116811 +0000 UTC"}, Hostname:"ip-172-31-21-135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.642 [INFO][5939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.642 [INFO][5939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.642 [INFO][5939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-135' Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.666 [INFO][5939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.684 [INFO][5939] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.713 [INFO][5939] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.718 [INFO][5939] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.729 [INFO][5939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.730 [INFO][5939] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.737 [INFO][5939] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3 Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.755 [INFO][5939] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.787 [INFO][5939] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.138/26] block=192.168.34.128/26 handle="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.788 [INFO][5939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.138/26] handle="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" host="ip-172-31-21-135" Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.788 [INFO][5939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:25:23.862692 containerd[2019]: 2025-06-20 18:25:23.788 [INFO][5939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.138/26] IPv6=[] ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" HandleID="k8s-pod-network.66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Workload="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.865270 containerd[2019]: 2025-06-20 18:25:23.793 [INFO][5925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0", GenerateName:"calico-apiserver-599868b7d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"13165b96-4ed5-4c1a-ace8-5eca5421face", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 25, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599868b7d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"", Pod:"calico-apiserver-599868b7d5-7znmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali799ef61c771", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:23.865270 containerd[2019]: 2025-06-20 18:25:23.794 [INFO][5925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.138/32] ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.865270 containerd[2019]: 2025-06-20 18:25:23.794 [INFO][5925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali799ef61c771 ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.865270 containerd[2019]: 2025-06-20 18:25:23.822 [INFO][5925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.865270 containerd[2019]: 2025-06-20 18:25:23.824 [INFO][5925] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0", GenerateName:"calico-apiserver-599868b7d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"13165b96-4ed5-4c1a-ace8-5eca5421face", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 25, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599868b7d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-135", ContainerID:"66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3", Pod:"calico-apiserver-599868b7d5-7znmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali799ef61c771", MAC:"ba:24:e2:c4:59:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:25:23.865270 containerd[2019]: 2025-06-20 18:25:23.854 [INFO][5925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" Namespace="calico-apiserver" Pod="calico-apiserver-599868b7d5-7znmz" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--599868b7d5--7znmz-eth0" Jun 20 18:25:23.955086 containerd[2019]: time="2025-06-20T18:25:23.954760169Z" level=info msg="connecting to shim 66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3" address="unix:///run/containerd/s/abb6e7823835d7e0f290fc7491271e76d81fc20a8289c319457d781650d8fce9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:25:24.066747 systemd[1]: Started cri-containerd-66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3.scope - libcontainer container 66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3. 
Jun 20 18:25:24.094361 containerd[2019]: time="2025-06-20T18:25:24.094146877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"c4fefa5baf939491f314ca8f3fd20d19a16d83dc90ec1f8782935db6b8a47b33\" pid:5959 exit_status:1 exited_at:{seconds:1750443924 nanos:91845325}" Jun 20 18:25:24.243058 containerd[2019]: time="2025-06-20T18:25:24.242994554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599868b7d5-7znmz,Uid:13165b96-4ed5-4c1a-ace8-5eca5421face,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3\"" Jun 20 18:25:24.256750 containerd[2019]: time="2025-06-20T18:25:24.256692470Z" level=info msg="CreateContainer within sandbox \"66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:25:24.276156 containerd[2019]: time="2025-06-20T18:25:24.276086294Z" level=info msg="Container a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:24.293391 containerd[2019]: time="2025-06-20T18:25:24.292417814Z" level=info msg="CreateContainer within sandbox \"66632773a3a30cd89673aed3383647882245b0823c76c66755c1b666996b2ca3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc\"" Jun 20 18:25:24.295467 containerd[2019]: time="2025-06-20T18:25:24.294520382Z" level=info msg="StartContainer for \"a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc\"" Jun 20 18:25:24.301960 containerd[2019]: time="2025-06-20T18:25:24.300305091Z" level=info msg="connecting to shim a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc" address="unix:///run/containerd/s/abb6e7823835d7e0f290fc7491271e76d81fc20a8289c319457d781650d8fce9" protocol=ttrpc version=3 Jun 20 18:25:24.359859 systemd[1]: Started cri-containerd-a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc.scope - libcontainer container a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc. 
Jun 20 18:25:24.490892 containerd[2019]: time="2025-06-20T18:25:24.490581471Z" level=info msg="StartContainer for \"a32bb88c300da580d0f64ea3e127e8ec4632bbdf55de1b9239aaf2225dbfe5fc\" returns successfully" Jun 20 18:25:24.732533 kubelet[3506]: I0620 18:25:24.731956 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-599868b7d5-7znmz" podStartSLOduration=1.731598545 podStartE2EDuration="1.731598545s" podCreationTimestamp="2025-06-20 18:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:25:24.727845245 +0000 UTC m=+68.026426431" watchObservedRunningTime="2025-06-20 18:25:24.731598545 +0000 UTC m=+68.030179647" Jun 20 18:25:25.440444 containerd[2019]: time="2025-06-20T18:25:25.440374348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:25.442358 containerd[2019]: time="2025-06-20T18:25:25.442263184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 20 18:25:25.445370 containerd[2019]: time="2025-06-20T18:25:25.444960016Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:25.449715 containerd[2019]: time="2025-06-20T18:25:25.449662108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:25.451297 containerd[2019]: time="2025-06-20T18:25:25.450799300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 4.182065865s" Jun 20 18:25:25.451297 containerd[2019]: time="2025-06-20T18:25:25.450852724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 20 18:25:25.454735 containerd[2019]: time="2025-06-20T18:25:25.454397248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:25:25.496080 containerd[2019]: time="2025-06-20T18:25:25.495138568Z" level=info msg="CreateContainer within sandbox \"6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 18:25:25.515824 containerd[2019]: time="2025-06-20T18:25:25.515769365Z" level=info msg="Container 8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:25.556709 systemd-networkd[1817]: cali799ef61c771: Gained IPv6LL Jun 20 18:25:25.579399 containerd[2019]: time="2025-06-20T18:25:25.579320729Z" level=info msg="CreateContainer within sandbox \"6fb1fc41d5320d0cabfc76b8e47fca1e190237b18442bff81f597e44ad1a3ec1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\"" Jun 20 18:25:25.582477 containerd[2019]: time="2025-06-20T18:25:25.580486277Z" level=info msg="StartContainer for \"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\"" Jun 20 18:25:25.585137 containerd[2019]: time="2025-06-20T18:25:25.584633069Z" level=info msg="connecting to shim 8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690" address="unix:///run/containerd/s/ff01f7834029f56815ae0da38c9a4b3d255ec9055c4e3c45e0271d104cfbe622" protocol=ttrpc version=3 Jun 20 18:25:25.628993 systemd[1]: Started cri-containerd-8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690.scope - libcontainer container 8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690. Jun 20 18:25:25.740100 containerd[2019]: time="2025-06-20T18:25:25.739779882Z" level=info msg="StartContainer for \"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" returns successfully" Jun 20 18:25:25.798825 containerd[2019]: time="2025-06-20T18:25:25.798751566Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:25.804911 containerd[2019]: time="2025-06-20T18:25:25.802382646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 18:25:25.809916 containerd[2019]: time="2025-06-20T18:25:25.809826330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 355.367138ms" Jun 20 18:25:25.809916 containerd[2019]: time="2025-06-20T18:25:25.809893686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:25:25.812112 containerd[2019]: time="2025-06-20T18:25:25.811916310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 18:25:25.822126 containerd[2019]: time="2025-06-20T18:25:25.821789598Z" level=info msg="CreateContainer within sandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:25:25.841658 containerd[2019]: time="2025-06-20T18:25:25.841488678Z" level=info msg="Container 786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:25.877243 containerd[2019]: time="2025-06-20T18:25:25.877128678Z" level=info msg="CreateContainer within sandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\"" Jun 20 18:25:25.880372 containerd[2019]: time="2025-06-20T18:25:25.880040982Z" level=info msg="StartContainer for \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\"" Jun 20 18:25:25.885762 containerd[2019]: time="2025-06-20T18:25:25.885625842Z" level=info msg="connecting to shim 786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196" address="unix:///run/containerd/s/44040e00445062496e6433f1ce29a5625edcb877fb8b4ed1089fa84e53ac3d4b" 
protocol=ttrpc version=3 Jun 20 18:25:26.003529 systemd[1]: Started cri-containerd-786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196.scope - libcontainer container 786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196. Jun 20 18:25:26.305333 containerd[2019]: time="2025-06-20T18:25:26.305271124Z" level=info msg="StartContainer for \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" returns successfully" Jun 20 18:25:26.721798 containerd[2019]: time="2025-06-20T18:25:26.721662547Z" level=info msg="StopContainer for \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" with timeout 30 (s)" Jun 20 18:25:26.725764 containerd[2019]: time="2025-06-20T18:25:26.725701075Z" level=info msg="Stop container \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" with signal terminated" Jun 20 18:25:26.737231 kubelet[3506]: I0620 18:25:26.735691 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:25:26.811388 kubelet[3506]: I0620 18:25:26.810804 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65fd8789dd-9nwnw" podStartSLOduration=39.038707684 podStartE2EDuration="51.810779935s" podCreationTimestamp="2025-06-20 18:24:35 +0000 UTC" firstStartedPulling="2025-06-20 18:25:13.039296859 +0000 UTC m=+56.337877961" lastFinishedPulling="2025-06-20 18:25:25.811369038 +0000 UTC m=+69.109950212" observedRunningTime="2025-06-20 18:25:26.764563195 +0000 UTC m=+70.063144309" watchObservedRunningTime="2025-06-20 18:25:26.810779935 +0000 UTC m=+70.109361061" Jun 20 18:25:26.814185 kubelet[3506]: I0620 18:25:26.812741 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86696f5875-4m9rw" podStartSLOduration=29.895358467 podStartE2EDuration="42.812719747s" podCreationTimestamp="2025-06-20 18:24:44 +0000 UTC" firstStartedPulling="2025-06-20 18:25:12.535910608 +0000 UTC m=+55.834491710" lastFinishedPulling="2025-06-20 18:25:25.4532719 +0000 UTC m=+68.751852990" observedRunningTime="2025-06-20 18:25:26.808913707 +0000 UTC m=+70.107494821" watchObservedRunningTime="2025-06-20 18:25:26.812719747 +0000 UTC m=+70.111300849" Jun 20 18:25:26.836864 systemd[1]: cri-containerd-786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196.scope: Deactivated successfully. Jun 20 18:25:26.851449 containerd[2019]: time="2025-06-20T18:25:26.851204095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" id:\"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" pid:6120 exit_status:1 exited_at:{seconds:1750443926 nanos:845688559}" Jun 20 18:25:26.904371 containerd[2019]: time="2025-06-20T18:25:26.903180271Z" level=info msg="received exit event container_id:\"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" id:\"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" pid:6120 exit_status:1 exited_at:{seconds:1750443926 nanos:845688559}" Jun 20 18:25:27.012774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196-rootfs.mount: Deactivated successfully. 
Jun 20 18:25:27.083365 containerd[2019]: time="2025-06-20T18:25:27.083245072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" id:\"9c99f22a0683d5a8052860f16235e4223078f42d9d2c97fc572047519f10de2c\" pid:6166 exited_at:{seconds:1750443927 nanos:82681528}" Jun 20 18:25:27.382047 containerd[2019]: time="2025-06-20T18:25:27.381995070Z" level=info msg="StopContainer for \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" returns successfully" Jun 20 18:25:27.474125 containerd[2019]: time="2025-06-20T18:25:27.473686182Z" level=info msg="StopPodSandbox for \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\"" Jun 20 18:25:27.475030 containerd[2019]: time="2025-06-20T18:25:27.474597990Z" level=info msg="Container to stop \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:27.512809 systemd[1]: cri-containerd-edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf.scope: Deactivated successfully. Jun 20 18:25:27.524619 containerd[2019]: time="2025-06-20T18:25:27.524536543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" id:\"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" pid:5591 exit_status:137 exited_at:{seconds:1750443927 nanos:520810195}" Jun 20 18:25:27.666150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf-rootfs.mount: Deactivated successfully. Jun 20 18:25:27.694467 containerd[2019]: time="2025-06-20T18:25:27.694320883Z" level=info msg="shim disconnected" id=edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf namespace=k8s.io Jun 20 18:25:27.694467 containerd[2019]: time="2025-06-20T18:25:27.694406323Z" level=warning msg="cleaning up after shim disconnected" id=edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf namespace=k8s.io Jun 20 18:25:27.694467 containerd[2019]: time="2025-06-20T18:25:27.694466467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:25:27.875091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf-shm.mount: Deactivated successfully. Jun 20 18:25:27.991364 containerd[2019]: time="2025-06-20T18:25:27.990942501Z" level=info msg="received exit event sandbox_id:\"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" exit_status:137 exited_at:{seconds:1750443927 nanos:520810195}" Jun 20 18:25:28.197458 systemd-networkd[1817]: cali8aecf387b9a: Link DOWN Jun 20 18:25:28.197993 systemd-networkd[1817]: cali8aecf387b9a: Lost carrier Jun 20 18:25:28.344523 systemd[1]: Started sshd@10-172.31.21.135:22-139.178.68.195:35346.service - OpenSSH per-connection server daemon (139.178.68.195:35346). 
Jun 20 18:25:28.398570 containerd[2019]: time="2025-06-20T18:25:28.398499871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:28.406378 containerd[2019]: time="2025-06-20T18:25:28.405129763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925" Jun 20 18:25:28.406716 containerd[2019]: time="2025-06-20T18:25:28.406652203Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:28.438010 containerd[2019]: time="2025-06-20T18:25:28.437937271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:25:28.441014 containerd[2019]: time="2025-06-20T18:25:28.440936827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 2.628956917s" Jun 20 18:25:28.441322 containerd[2019]: time="2025-06-20T18:25:28.441290803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\"" Jun 20 18:25:28.453980 containerd[2019]: time="2025-06-20T18:25:28.453930139Z" level=info msg="CreateContainer within sandbox \"28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 18:25:28.540953 containerd[2019]: time="2025-06-20T18:25:28.540780248Z" level=info msg="Container 72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:25:28.548775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088324447.mount: Deactivated successfully. Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.188 [INFO][6243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.191 [INFO][6243] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" iface="eth0" netns="/var/run/netns/cni-460cf441-eee5-991b-0388-36a783774056" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.194 [INFO][6243] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" iface="eth0" netns="/var/run/netns/cni-460cf441-eee5-991b-0388-36a783774056" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.203 [INFO][6243] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" after=10.365228ms iface="eth0" netns="/var/run/netns/cni-460cf441-eee5-991b-0388-36a783774056" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.203 [INFO][6243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.203 [INFO][6243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.337 [INFO][6252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.337 [INFO][6252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.344 [INFO][6252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.517 [INFO][6252] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.517 [INFO][6252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.522 [INFO][6252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:25:28.557542 containerd[2019]: 2025-06-20 18:25:28.535 [INFO][6243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:25:28.565275 containerd[2019]: time="2025-06-20T18:25:28.560937284Z" level=info msg="TearDown network for sandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" successfully" Jun 20 18:25:28.565275 containerd[2019]: time="2025-06-20T18:25:28.563464100Z" level=info msg="StopPodSandbox for \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" returns successfully" Jun 20 18:25:28.568141 systemd[1]: run-netns-cni\x2d460cf441\x2deee5\x2d991b\x2d0388\x2d36a783774056.mount: Deactivated successfully. 
Jun 20 18:25:28.592060 containerd[2019]: time="2025-06-20T18:25:28.591874808Z" level=info msg="CreateContainer within sandbox \"28dc7c104225f46acff34d8d0278d4404a4fb4dee03d53d168cff72ee3fa8d34\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe\"" Jun 20 18:25:28.594202 containerd[2019]: time="2025-06-20T18:25:28.593937944Z" level=info msg="StartContainer for \"72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe\"" Jun 20 18:25:28.599601 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 35346 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:28.607392 containerd[2019]: time="2025-06-20T18:25:28.607308500Z" level=info msg="connecting to shim 72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe" address="unix:///run/containerd/s/84c3c38bfaabb913596eb455b93f8ea4bcf734a45bdf1fae9069738e3cd517b3" protocol=ttrpc version=3 Jun 20 18:25:28.610825 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:28.631953 systemd-logind[1989]: New session 11 of user core. Jun 20 18:25:28.638771 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:25:28.697733 systemd[1]: Started cri-containerd-72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe.scope - libcontainer container 72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe. Jun 20 18:25:28.748166 kubelet[3506]: I0620 18:25:28.748100 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sps2k\" (UniqueName: \"kubernetes.io/projected/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-kube-api-access-sps2k\") pod \"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7\" (UID: \"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7\") " Jun 20 18:25:28.752083 kubelet[3506]: I0620 18:25:28.748174 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-calico-apiserver-certs\") pod \"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7\" (UID: \"e81c6d23-91be-4c5a-a154-8ce97a0e9ef7\") " Jun 20 18:25:28.766356 kubelet[3506]: I0620 18:25:28.764517 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e81c6d23-91be-4c5a-a154-8ce97a0e9ef7" (UID: "e81c6d23-91be-4c5a-a154-8ce97a0e9ef7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:25:28.765998 systemd[1]: var-lib-kubelet-pods-e81c6d23\x2d91be\x2d4c5a\x2da154\x2d8ce97a0e9ef7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 18:25:28.773516 kubelet[3506]: I0620 18:25:28.773292 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-kube-api-access-sps2k" (OuterVolumeSpecName: "kube-api-access-sps2k") pod "e81c6d23-91be-4c5a-a154-8ce97a0e9ef7" (UID: "e81c6d23-91be-4c5a-a154-8ce97a0e9ef7"). InnerVolumeSpecName "kube-api-access-sps2k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:25:28.780065 systemd[1]: var-lib-kubelet-pods-e81c6d23\x2d91be\x2d4c5a\x2da154\x2d8ce97a0e9ef7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsps2k.mount: Deactivated successfully. 
Jun 20 18:25:28.828462 kubelet[3506]: I0620 18:25:28.827570 3506 scope.go:117] "RemoveContainer" containerID="786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196" Jun 20 18:25:28.852468 kubelet[3506]: I0620 18:25:28.852269 3506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sps2k\" (UniqueName: \"kubernetes.io/projected/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-kube-api-access-sps2k\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:28.852468 kubelet[3506]: I0620 18:25:28.852316 3506 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7-calico-apiserver-certs\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:28.853428 containerd[2019]: time="2025-06-20T18:25:28.853279605Z" level=info msg="RemoveContainer for \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\"" Jun 20 18:25:28.864619 systemd[1]: Removed slice kubepods-besteffort-pode81c6d23_91be_4c5a_a154_8ce97a0e9ef7.slice - libcontainer container kubepods-besteffort-pode81c6d23_91be_4c5a_a154_8ce97a0e9ef7.slice. Jun 20 18:25:28.878921 containerd[2019]: time="2025-06-20T18:25:28.878830761Z" level=info msg="RemoveContainer for \"786d772ab1383316f1244ee7ee99deee8588f5b69b67804b634ec430c26c5196\" returns successfully" Jun 20 18:25:29.058532 kubelet[3506]: I0620 18:25:29.058485 3506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e81c6d23-91be-4c5a-a154-8ce97a0e9ef7" path="/var/lib/kubelet/pods/e81c6d23-91be-4c5a-a154-8ce97a0e9ef7/volumes" Jun 20 18:25:29.076414 containerd[2019]: time="2025-06-20T18:25:29.076280550Z" level=info msg="StartContainer for \"72139cdbdc0beb87c9328ddddaf0802e112de04100b1f8f2252b16cfe44af1fe\" returns successfully" Jun 20 18:25:29.100390 sshd[6283]: Connection closed by 139.178.68.195 port 35346 Jun 20 18:25:29.100729 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:29.112771 systemd[1]: sshd@10-172.31.21.135:22-139.178.68.195:35346.service: Deactivated successfully. Jun 20 18:25:29.129393 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:25:29.136668 systemd-logind[1989]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:25:29.144703 systemd-logind[1989]: Removed session 11. Jun 20 18:25:29.186102 kubelet[3506]: I0620 18:25:29.185885 3506 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 18:25:29.186102 kubelet[3506]: I0620 18:25:29.185938 3506 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 18:25:29.483630 containerd[2019]: time="2025-06-20T18:25:29.482704664Z" level=info msg="StopContainer for \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" with timeout 30 (s)" Jun 20 18:25:29.484814 containerd[2019]: time="2025-06-20T18:25:29.484646648Z" level=info msg="Stop container \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" with signal terminated" Jun 20 18:25:29.591820 systemd[1]: cri-containerd-923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87.scope: Deactivated successfully. Jun 20 18:25:29.595148 systemd[1]: cri-containerd-923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87.scope: Consumed 1.921s CPU time, 58.6M memory peak. 
Jun 20 18:25:29.598722 containerd[2019]: time="2025-06-20T18:25:29.597270081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" id:\"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" pid:5749 exit_status:1 exited_at:{seconds:1750443929 nanos:594657393}" Jun 20 18:25:29.602734 containerd[2019]: time="2025-06-20T18:25:29.602673117Z" level=info msg="received exit event container_id:\"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" id:\"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" pid:5749 exit_status:1 exited_at:{seconds:1750443929 nanos:594657393}" Jun 20 18:25:29.673463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87-rootfs.mount: Deactivated successfully. Jun 20 18:25:29.709532 containerd[2019]: time="2025-06-20T18:25:29.709407141Z" level=info msg="StopContainer for \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" returns successfully" Jun 20 18:25:29.712214 containerd[2019]: time="2025-06-20T18:25:29.711485553Z" level=info msg="StopPodSandbox for \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\"" Jun 20 18:25:29.712214 containerd[2019]: time="2025-06-20T18:25:29.711599841Z" level=info msg="Container to stop \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:25:29.729159 systemd[1]: cri-containerd-39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a.scope: Deactivated successfully. Jun 20 18:25:29.735811 containerd[2019]: time="2025-06-20T18:25:29.735414154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" id:\"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" pid:5213 exit_status:137 exited_at:{seconds:1750443929 nanos:732816441}" Jun 20 18:25:29.791223 containerd[2019]: time="2025-06-20T18:25:29.788810314Z" level=info msg="received exit event sandbox_id:\"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" exit_status:137 exited_at:{seconds:1750443929 nanos:732816441}" Jun 20 18:25:29.794979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a-rootfs.mount: Deactivated successfully. Jun 20 18:25:29.798326 containerd[2019]: time="2025-06-20T18:25:29.796996198Z" level=info msg="shim disconnected" id=39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a namespace=k8s.io Jun 20 18:25:29.798326 containerd[2019]: time="2025-06-20T18:25:29.798010270Z" level=warning msg="cleaning up after shim disconnected" id=39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a namespace=k8s.io Jun 20 18:25:29.798326 containerd[2019]: time="2025-06-20T18:25:29.798077698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:25:29.797292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a-shm.mount: Deactivated successfully. 
Jun 20 18:25:29.835848 kubelet[3506]: I0620 18:25:29.835774 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:25:29.885628 kubelet[3506]: I0620 18:25:29.885478 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4g8ft" podStartSLOduration=28.420723123 podStartE2EDuration="45.88545355s" podCreationTimestamp="2025-06-20 18:24:44 +0000 UTC" firstStartedPulling="2025-06-20 18:25:10.978230188 +0000 UTC m=+54.276811278" lastFinishedPulling="2025-06-20 18:25:28.442960615 +0000 UTC m=+71.741541705" observedRunningTime="2025-06-20 18:25:29.882541066 +0000 UTC m=+73.181122168" watchObservedRunningTime="2025-06-20 18:25:29.88545355 +0000 UTC m=+73.184034688" Jun 20 18:25:29.934408 systemd-networkd[1817]: cali9e3d7078c42: Link DOWN Jun 20 18:25:29.934423 systemd-networkd[1817]: cali9e3d7078c42: Lost carrier Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:29.929 [INFO][6379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:29.931 [INFO][6379] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" iface="eth0" netns="/var/run/netns/cni-180edf28-633b-6a81-ffbb-cc466c1e5eeb" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:29.931 [INFO][6379] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" iface="eth0" netns="/var/run/netns/cni-180edf28-633b-6a81-ffbb-cc466c1e5eeb" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:29.943 [INFO][6379] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" after=11.678785ms iface="eth0" netns="/var/run/netns/cni-180edf28-633b-6a81-ffbb-cc466c1e5eeb" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:29.943 [INFO][6379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:29.943 [INFO][6379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.002 [INFO][6397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.002 [INFO][6397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.002 [INFO][6397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.073 [INFO][6397] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.073 [INFO][6397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.076 [INFO][6397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:25:30.082763 containerd[2019]: 2025-06-20 18:25:30.079 [INFO][6379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:25:30.085029 containerd[2019]: time="2025-06-20T18:25:30.083991103Z" level=info msg="TearDown network for sandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" successfully" Jun 20 18:25:30.085029 containerd[2019]: time="2025-06-20T18:25:30.084032887Z" level=info msg="StopPodSandbox for \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" returns successfully" Jun 20 18:25:30.089973 systemd[1]: run-netns-cni\x2d180edf28\x2d633b\x2d6a81\x2dffbb\x2dcc466c1e5eeb.mount: Deactivated successfully. Jun 20 18:25:30.264403 kubelet[3506]: I0620 18:25:30.263882 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-calico-apiserver-certs\") pod \"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5\" (UID: \"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5\") " Jun 20 18:25:30.264403 kubelet[3506]: I0620 18:25:30.263992 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9cwx\" (UniqueName: \"kubernetes.io/projected/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-kube-api-access-w9cwx\") pod \"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5\" (UID: \"9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5\") " Jun 20 18:25:30.270637 kubelet[3506]: I0620 18:25:30.270584 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5" (UID: "9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:25:30.274379 kubelet[3506]: I0620 18:25:30.272717 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-kube-api-access-w9cwx" (OuterVolumeSpecName: "kube-api-access-w9cwx") pod "9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5" (UID: "9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5"). InnerVolumeSpecName "kube-api-access-w9cwx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:25:30.276889 systemd[1]: var-lib-kubelet-pods-9ccf7c28\x2d46e1\x2d4e76\x2db9e7\x2df3d8e8500ea5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw9cwx.mount: Deactivated successfully. 
Jun 20 18:25:30.277575 systemd[1]: var-lib-kubelet-pods-9ccf7c28\x2d46e1\x2d4e76\x2db9e7\x2df3d8e8500ea5-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 18:25:30.365455 kubelet[3506]: I0620 18:25:30.365246 3506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9cwx\" (UniqueName: \"kubernetes.io/projected/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-kube-api-access-w9cwx\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:30.365455 kubelet[3506]: I0620 18:25:30.365292 3506 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5-calico-apiserver-certs\") on node \"ip-172-31-21-135\" DevicePath \"\"" Jun 20 18:25:30.871523 systemd[1]: Removed slice kubepods-besteffort-pod9ccf7c28_46e1_4e76_b9e7_f3d8e8500ea5.slice - libcontainer container kubepods-besteffort-pod9ccf7c28_46e1_4e76_b9e7_f3d8e8500ea5.slice. Jun 20 18:25:30.871774 systemd[1]: kubepods-besteffort-pod9ccf7c28_46e1_4e76_b9e7_f3d8e8500ea5.slice: Consumed 1.982s CPU time, 58.9M memory peak. Jun 20 18:25:31.047864 kubelet[3506]: I0620 18:25:31.047796 3506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5" path="/var/lib/kubelet/pods/9ccf7c28-46e1-4e76-b9e7-f3d8e8500ea5/volumes" Jun 20 18:25:31.720412 containerd[2019]: time="2025-06-20T18:25:31.720323879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"dfc4b52b4eabbbae09f214b5e175bc365c453f219b09742cc59e1ce7eba782d9\" pid:6424 exited_at:{seconds:1750443931 nanos:719800655}" Jun 20 18:25:32.367770 ntpd[1981]: Listen normally on 19 cali799ef61c771 [fe80::ecee:eeff:feee:eeee%16]:123 Jun 20 18:25:32.368471 ntpd[1981]: 20 Jun 18:25:32 ntpd[1981]: Listen normally on 19 cali799ef61c771 [fe80::ecee:eeff:feee:eeee%16]:123 Jun 20 18:25:32.368471 ntpd[1981]: 20 Jun 18:25:32 ntpd[1981]: Deleting interface #12 cali9e3d7078c42, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Jun 20 18:25:32.368471 ntpd[1981]: 20 Jun 18:25:32 ntpd[1981]: Deleting interface #17 cali8aecf387b9a, fe80::ecee:eeff:feee:eeee%14#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Jun 20 18:25:32.367842 ntpd[1981]: Deleting interface #12 cali9e3d7078c42, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Jun 20 18:25:32.367879 ntpd[1981]: Deleting interface #17 cali8aecf387b9a, fe80::ecee:eeff:feee:eeee%14#123, interface stats: received=0, sent=0, dropped=0, active_time=16 secs Jun 20 18:25:34.139627 systemd[1]: Started sshd@11-172.31.21.135:22-139.178.68.195:51584.service - OpenSSH per-connection server daemon (139.178.68.195:51584). Jun 20 18:25:34.345500 sshd[6438]: Accepted publickey for core from 139.178.68.195 port 51584 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:34.348320 sshd-session[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:34.356116 systemd-logind[1989]: New session 12 of user core. Jun 20 18:25:34.372655 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 20 18:25:34.626060 sshd[6440]: Connection closed by 139.178.68.195 port 51584 Jun 20 18:25:34.627119 sshd-session[6438]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:34.633270 systemd[1]: sshd@11-172.31.21.135:22-139.178.68.195:51584.service: Deactivated successfully. Jun 20 18:25:34.638239 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:25:34.641956 systemd-logind[1989]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:25:34.645993 systemd-logind[1989]: Removed session 12. Jun 20 18:25:34.663950 systemd[1]: Started sshd@12-172.31.21.135:22-139.178.68.195:51592.service - OpenSSH per-connection server daemon (139.178.68.195:51592). Jun 20 18:25:34.868364 sshd[6453]: Accepted publickey for core from 139.178.68.195 port 51592 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:34.871353 sshd-session[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:34.880448 systemd-logind[1989]: New session 13 of user core. Jun 20 18:25:34.886665 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:25:35.238546 sshd[6455]: Connection closed by 139.178.68.195 port 51592 Jun 20 18:25:35.237057 sshd-session[6453]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:35.246322 systemd-logind[1989]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:25:35.251716 systemd[1]: sshd@12-172.31.21.135:22-139.178.68.195:51592.service: Deactivated successfully. Jun 20 18:25:35.257985 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:25:35.288813 systemd[1]: Started sshd@13-172.31.21.135:22-139.178.68.195:51602.service - OpenSSH per-connection server daemon (139.178.68.195:51602). Jun 20 18:25:35.290756 systemd-logind[1989]: Removed session 13. Jun 20 18:25:35.492312 sshd[6467]: Accepted publickey for core from 139.178.68.195 port 51602 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:35.494521 sshd-session[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:35.506991 systemd-logind[1989]: New session 14 of user core. Jun 20 18:25:35.514026 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:25:35.783753 containerd[2019]: time="2025-06-20T18:25:35.783681928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" id:\"9dc3333e1576c989d19f30972c165c849e278133d8deca622cf97b656229569b\" pid:6484 exited_at:{seconds:1750443935 nanos:782598916}" Jun 20 18:25:35.828288 sshd[6469]: Connection closed by 139.178.68.195 port 51602 Jun 20 18:25:35.830520 sshd-session[6467]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:35.840199 systemd[1]: sshd@13-172.31.21.135:22-139.178.68.195:51602.service: Deactivated successfully. Jun 20 18:25:35.848418 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:25:35.851763 systemd-logind[1989]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:25:35.854934 systemd-logind[1989]: Removed session 14. 
Jun 20 18:25:37.635168 containerd[2019]: time="2025-06-20T18:25:37.635107973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" id:\"bc04c8042492f125ac190c69ec7288988c03fa010224ed0f8574d1128d158156\" pid:6519 exited_at:{seconds:1750443937 nanos:634593713}" Jun 20 18:25:40.872006 systemd[1]: Started sshd@14-172.31.21.135:22-139.178.68.195:51606.service - OpenSSH per-connection server daemon (139.178.68.195:51606). Jun 20 18:25:41.077378 sshd[6536]: Accepted publickey for core from 139.178.68.195 port 51606 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:41.080028 sshd-session[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:41.088258 systemd-logind[1989]: New session 15 of user core. Jun 20 18:25:41.095642 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:25:41.386444 sshd[6538]: Connection closed by 139.178.68.195 port 51606 Jun 20 18:25:41.389629 sshd-session[6536]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:41.398589 systemd-logind[1989]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:25:41.400108 systemd[1]: sshd@14-172.31.21.135:22-139.178.68.195:51606.service: Deactivated successfully. Jun 20 18:25:41.408548 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:25:41.415849 systemd-logind[1989]: Removed session 15. Jun 20 18:25:46.429808 systemd[1]: Started sshd@15-172.31.21.135:22-139.178.68.195:60538.service - OpenSSH per-connection server daemon (139.178.68.195:60538). Jun 20 18:25:46.646447 sshd[6557]: Accepted publickey for core from 139.178.68.195 port 60538 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:46.649795 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:46.658431 systemd-logind[1989]: New session 16 of user core. Jun 20 18:25:46.666633 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:25:46.992397 sshd[6559]: Connection closed by 139.178.68.195 port 60538 Jun 20 18:25:46.991008 sshd-session[6557]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:47.003131 systemd[1]: sshd@15-172.31.21.135:22-139.178.68.195:60538.service: Deactivated successfully. Jun 20 18:25:47.016019 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:25:47.022026 systemd-logind[1989]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:25:47.028925 systemd-logind[1989]: Removed session 16. Jun 20 18:25:52.030203 systemd[1]: Started sshd@16-172.31.21.135:22-139.178.68.195:60548.service - OpenSSH per-connection server daemon (139.178.68.195:60548). Jun 20 18:25:52.239848 sshd[6573]: Accepted publickey for core from 139.178.68.195 port 60548 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:52.243156 sshd-session[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:52.253015 systemd-logind[1989]: New session 17 of user core. Jun 20 18:25:52.259645 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:25:52.523773 sshd[6575]: Connection closed by 139.178.68.195 port 60548 Jun 20 18:25:52.524653 sshd-session[6573]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:52.531945 systemd[1]: sshd@16-172.31.21.135:22-139.178.68.195:60548.service: Deactivated successfully. 
Jun 20 18:25:52.536686 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:25:52.539699 systemd-logind[1989]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:25:52.542981 systemd-logind[1989]: Removed session 17. Jun 20 18:25:53.810695 containerd[2019]: time="2025-06-20T18:25:53.810638925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"9563f9fe4bd91fd90208ae7856a7d0db10221461f8f689c924234d662e3a4451\" pid:6600 exited_at:{seconds:1750443953 nanos:809768361}" Jun 20 18:25:56.802802 containerd[2019]: time="2025-06-20T18:25:56.802687308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" id:\"5881de75bfa46cd901d48bac853c43091420eaba01c58c93de7a47cff748ec34\" pid:6625 exited_at:{seconds:1750443956 nanos:801947196}" Jun 20 18:25:57.562429 systemd[1]: Started sshd@17-172.31.21.135:22-139.178.68.195:57508.service - OpenSSH per-connection server daemon (139.178.68.195:57508). Jun 20 18:25:57.771823 sshd[6637]: Accepted publickey for core from 139.178.68.195 port 57508 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:57.775085 sshd-session[6637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:57.784462 systemd-logind[1989]: New session 18 of user core. Jun 20 18:25:57.791607 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:25:58.044121 sshd[6639]: Connection closed by 139.178.68.195 port 57508 Jun 20 18:25:58.045079 sshd-session[6637]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:58.052015 systemd[1]: sshd@17-172.31.21.135:22-139.178.68.195:57508.service: Deactivated successfully. Jun 20 18:25:58.055905 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:25:58.058110 systemd-logind[1989]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:25:58.062650 systemd-logind[1989]: Removed session 18. Jun 20 18:25:58.081115 systemd[1]: Started sshd@18-172.31.21.135:22-139.178.68.195:57520.service - OpenSSH per-connection server daemon (139.178.68.195:57520). Jun 20 18:25:58.282666 sshd[6650]: Accepted publickey for core from 139.178.68.195 port 57520 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:58.286590 sshd-session[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:58.294992 systemd-logind[1989]: New session 19 of user core. Jun 20 18:25:58.311938 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:25:58.875123 sshd[6652]: Connection closed by 139.178.68.195 port 57520 Jun 20 18:25:58.876080 sshd-session[6650]: pam_unix(sshd:session): session closed for user core Jun 20 18:25:58.885377 systemd-logind[1989]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:25:58.888692 systemd[1]: sshd@18-172.31.21.135:22-139.178.68.195:57520.service: Deactivated successfully. Jun 20 18:25:58.892892 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:25:58.910887 systemd-logind[1989]: Removed session 19. Jun 20 18:25:58.911789 systemd[1]: Started sshd@19-172.31.21.135:22-139.178.68.195:57532.service - OpenSSH per-connection server daemon (139.178.68.195:57532). 
Jun 20 18:25:59.114943 sshd[6662]: Accepted publickey for core from 139.178.68.195 port 57532 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:25:59.117838 sshd-session[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:59.130437 systemd-logind[1989]: New session 20 of user core. Jun 20 18:25:59.138668 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:26:00.492389 sshd[6664]: Connection closed by 139.178.68.195 port 57532 Jun 20 18:26:00.493585 sshd-session[6662]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:00.507274 systemd[1]: sshd@19-172.31.21.135:22-139.178.68.195:57532.service: Deactivated successfully. Jun 20 18:26:00.513647 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:26:00.519013 systemd-logind[1989]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:26:00.552527 systemd[1]: Started sshd@20-172.31.21.135:22-139.178.68.195:57544.service - OpenSSH per-connection server daemon (139.178.68.195:57544). Jun 20 18:26:00.559809 systemd-logind[1989]: Removed session 20. Jun 20 18:26:00.762140 sshd[6681]: Accepted publickey for core from 139.178.68.195 port 57544 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:26:00.764319 sshd-session[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:00.774424 systemd-logind[1989]: New session 21 of user core. Jun 20 18:26:00.779680 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:26:01.342171 sshd[6683]: Connection closed by 139.178.68.195 port 57544 Jun 20 18:26:01.343312 sshd-session[6681]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:01.351582 systemd-logind[1989]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:26:01.353242 systemd[1]: sshd@20-172.31.21.135:22-139.178.68.195:57544.service: Deactivated successfully. Jun 20 18:26:01.358733 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:26:01.378277 systemd-logind[1989]: Removed session 21. Jun 20 18:26:01.379649 systemd[1]: Started sshd@21-172.31.21.135:22-139.178.68.195:57548.service - OpenSSH per-connection server daemon (139.178.68.195:57548). Jun 20 18:26:01.582190 sshd[6693]: Accepted publickey for core from 139.178.68.195 port 57548 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:26:01.584724 sshd-session[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:01.594168 systemd-logind[1989]: New session 22 of user core. Jun 20 18:26:01.601629 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:26:01.850218 sshd[6695]: Connection closed by 139.178.68.195 port 57548 Jun 20 18:26:01.851229 sshd-session[6693]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:01.859154 systemd[1]: sshd@21-172.31.21.135:22-139.178.68.195:57548.service: Deactivated successfully. Jun 20 18:26:01.863542 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:26:01.865956 systemd-logind[1989]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:26:01.870234 systemd-logind[1989]: Removed session 22. 
Jun 20 18:26:05.740369 containerd[2019]: time="2025-06-20T18:26:05.740221760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" id:\"881fc6804851d55afce59b3931f2fbcd256a8c50981bb409ab88a8fcfcfe3571\" pid:6717 exited_at:{seconds:1750443965 nanos:739025300}" Jun 20 18:26:06.892685 systemd[1]: Started sshd@22-172.31.21.135:22-139.178.68.195:47502.service - OpenSSH per-connection server daemon (139.178.68.195:47502). Jun 20 18:26:07.108253 sshd[6730]: Accepted publickey for core from 139.178.68.195 port 47502 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:26:07.113748 sshd-session[6730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:07.128422 systemd-logind[1989]: New session 23 of user core. Jun 20 18:26:07.137687 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:26:07.401665 sshd[6733]: Connection closed by 139.178.68.195 port 47502 Jun 20 18:26:07.402696 sshd-session[6730]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:07.410668 systemd-logind[1989]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:26:07.412094 systemd[1]: sshd@22-172.31.21.135:22-139.178.68.195:47502.service: Deactivated successfully. Jun 20 18:26:07.415727 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:26:07.419171 systemd-logind[1989]: Removed session 23. Jun 20 18:26:12.444650 systemd[1]: Started sshd@23-172.31.21.135:22-139.178.68.195:47512.service - OpenSSH per-connection server daemon (139.178.68.195:47512). Jun 20 18:26:12.681260 sshd[6749]: Accepted publickey for core from 139.178.68.195 port 47512 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:26:12.686097 sshd-session[6749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:12.700917 systemd-logind[1989]: New session 24 of user core. Jun 20 18:26:12.706725 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:26:13.009551 sshd[6751]: Connection closed by 139.178.68.195 port 47512 Jun 20 18:26:13.010637 sshd-session[6749]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:13.018472 systemd[1]: sshd@23-172.31.21.135:22-139.178.68.195:47512.service: Deactivated successfully. Jun 20 18:26:13.019557 systemd-logind[1989]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:26:13.026710 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:26:13.035113 systemd-logind[1989]: Removed session 24. 
Jun 20 18:26:16.990012 kubelet[3506]: I0620 18:26:16.989957 3506 scope.go:117] "RemoveContainer" containerID="923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87" Jun 20 18:26:16.997032 containerd[2019]: time="2025-06-20T18:26:16.995574680Z" level=info msg="RemoveContainer for \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\"" Jun 20 18:26:17.006783 containerd[2019]: time="2025-06-20T18:26:17.006617884Z" level=info msg="RemoveContainer for \"923f501fa6bf9c31cfc3f2c1f4d4d15d690fd919994b23bc3490f4f7f134bc87\" returns successfully" Jun 20 18:26:17.012573 containerd[2019]: time="2025-06-20T18:26:17.012484696Z" level=info msg="StopPodSandbox for \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\"" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.168 [WARNING][6770] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.170 [INFO][6770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.171 [INFO][6770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" iface="eth0" netns="" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.172 [INFO][6770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.173 [INFO][6770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.272 [INFO][6779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.273 [INFO][6779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.274 [INFO][6779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.295 [WARNING][6779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.296 [INFO][6779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.303 [INFO][6779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:26:17.313846 containerd[2019]: 2025-06-20 18:26:17.308 [INFO][6770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.314772 containerd[2019]: time="2025-06-20T18:26:17.313952130Z" level=info msg="TearDown network for sandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" successfully" Jun 20 18:26:17.314772 containerd[2019]: time="2025-06-20T18:26:17.313992318Z" level=info msg="StopPodSandbox for \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" returns successfully" Jun 20 18:26:17.315718 containerd[2019]: time="2025-06-20T18:26:17.315672198Z" level=info msg="RemovePodSandbox for \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\"" Jun 20 18:26:17.316650 containerd[2019]: time="2025-06-20T18:26:17.316122162Z" level=info msg="Forcibly stopping sandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\"" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.415 [WARNING][6793] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.416 [INFO][6793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.416 [INFO][6793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" iface="eth0" netns="" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.417 [INFO][6793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.418 [INFO][6793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.514 [INFO][6800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.515 [INFO][6800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.518 [INFO][6800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.539 [WARNING][6800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.539 [INFO][6800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" HandleID="k8s-pod-network.edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--9nwnw-eth0" Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.546 [INFO][6800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:26:17.568466 containerd[2019]: 2025-06-20 18:26:17.557 [INFO][6793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf" Jun 20 18:26:17.568466 containerd[2019]: time="2025-06-20T18:26:17.568003123Z" level=info msg="TearDown network for sandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" successfully" Jun 20 18:26:17.577223 containerd[2019]: time="2025-06-20T18:26:17.577156039Z" level=info msg="Ensure that sandbox edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf in task-service has been cleanup successfully" Jun 20 18:26:17.585096 containerd[2019]: time="2025-06-20T18:26:17.585021619Z" level=info msg="RemovePodSandbox \"edb6db0e5c7b8ade1f7a50a9741e19f7a2c01bbcb3aecb70274cb5b068a87bcf\" returns successfully" Jun 20 18:26:17.586935 containerd[2019]: time="2025-06-20T18:26:17.586855015Z" level=info msg="StopPodSandbox for \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\"" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.690 [WARNING][6815] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.691 [INFO][6815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.691 [INFO][6815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" iface="eth0" netns="" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.691 [INFO][6815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.691 [INFO][6815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.751 [INFO][6823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.754 [INFO][6823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.754 [INFO][6823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.769 [WARNING][6823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.769 [INFO][6823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.772 [INFO][6823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:26:17.780969 containerd[2019]: 2025-06-20 18:26:17.776 [INFO][6815] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.782417 containerd[2019]: time="2025-06-20T18:26:17.781075232Z" level=info msg="TearDown network for sandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" successfully" Jun 20 18:26:17.782417 containerd[2019]: time="2025-06-20T18:26:17.781116788Z" level=info msg="StopPodSandbox for \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" returns successfully" Jun 20 18:26:17.782417 containerd[2019]: time="2025-06-20T18:26:17.782085440Z" level=info msg="RemovePodSandbox for \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\"" Jun 20 18:26:17.782417 containerd[2019]: time="2025-06-20T18:26:17.782134136Z" level=info msg="Forcibly stopping sandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\"" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.867 [WARNING][6837] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" WorkloadEndpoint="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.868 [INFO][6837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.868 [INFO][6837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" iface="eth0" netns="" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.868 [INFO][6837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.868 [INFO][6837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.914 [INFO][6844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.915 [INFO][6844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.915 [INFO][6844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.931 [WARNING][6844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.931 [INFO][6844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" HandleID="k8s-pod-network.39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Workload="ip--172--31--21--135-k8s-calico--apiserver--65fd8789dd--k4vsj-eth0" Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.934 [INFO][6844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:26:17.941844 containerd[2019]: 2025-06-20 18:26:17.937 [INFO][6837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a" Jun 20 18:26:17.942528 containerd[2019]: time="2025-06-20T18:26:17.941939865Z" level=info msg="TearDown network for sandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" successfully" Jun 20 18:26:17.947375 containerd[2019]: time="2025-06-20T18:26:17.946825749Z" level=info msg="Ensure that sandbox 39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a in task-service has been cleanup successfully" Jun 20 18:26:17.953968 containerd[2019]: time="2025-06-20T18:26:17.953864445Z" level=info msg="RemovePodSandbox \"39e832593a4b6996057d8dcbb375ab6f5574e38ebd58dd585006cd0f4cfaed7a\" returns successfully" Jun 20 18:26:18.052787 systemd[1]: Started sshd@24-172.31.21.135:22-139.178.68.195:41842.service - OpenSSH per-connection server daemon (139.178.68.195:41842). Jun 20 18:26:18.303280 sshd[6852]: Accepted publickey for core from 139.178.68.195 port 41842 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:26:18.309704 sshd-session[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:18.328582 systemd-logind[1989]: New session 25 of user core. Jun 20 18:26:18.335632 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:26:18.713521 sshd[6854]: Connection closed by 139.178.68.195 port 41842 Jun 20 18:26:18.713663 sshd-session[6852]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:18.722547 systemd[1]: sshd@24-172.31.21.135:22-139.178.68.195:41842.service: Deactivated successfully. Jun 20 18:26:18.728760 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:26:18.733120 systemd-logind[1989]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:26:18.738401 systemd-logind[1989]: Removed session 25. Jun 20 18:26:23.760416 systemd[1]: Started sshd@25-172.31.21.135:22-139.178.68.195:34638.service - OpenSSH per-connection server daemon (139.178.68.195:34638). Jun 20 18:26:23.973081 sshd[6887]: Accepted publickey for core from 139.178.68.195 port 34638 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ Jun 20 18:26:23.977145 sshd-session[6887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:23.989837 systemd-logind[1989]: New session 26 of user core. Jun 20 18:26:23.999622 systemd[1]: Started session-26.scope - Session 26 of User core. 
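Annotation: the ipam_plugin entries above show the shape of Calico's CNI DEL path as logged here: take the host-wide IPAM lock, release the allocation by HandleID, and treat a missing allocation as a warning to ignore rather than an error, so a repeated teardown of the same sandbox stays idempotent. Below is a minimal Go sketch of that lock-then-release-or-ignore pattern; it is illustrative only (the store type, handle string, and IP are invented for the example and are not Calico's actual code or data).

```go
package main

import (
	"fmt"
	"sync"
)

// ipamStore is a toy stand-in for a node-local IPAM datastore.
// The single mutex plays the role of the "host-wide IPAM lock" in the log.
type ipamStore struct {
	mu    sync.Mutex
	byHnd map[string]string // handleID -> allocated IP
}

// ReleaseByHandle mirrors the logged flow: acquire lock, release by handleID,
// warn-and-ignore if the address is already gone, then release the lock.
func (s *ipamStore) ReleaseByHandle(handleID string) {
	s.mu.Lock() // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock()

	ip, ok := s.byHnd[handleID]
	if !ok {
		// "Asked to release address but it doesn't exist. Ignoring"
		fmt.Printf("WARNING: no allocation for handle %s, ignoring\n", handleID)
		return
	}
	delete(s.byHnd, handleID)
	fmt.Printf("released %s for handle %s\n", ip, handleID)
}

func main() {
	s := &ipamStore{byHnd: map[string]string{"handle-a": "192.0.2.10"}}
	s.ReleaseByHandle("handle-a") // first DEL releases the address
	s.ReleaseByHandle("handle-a") // a second DEL is a harmless no-op
}
```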
Jun 20 18:26:24.150106 containerd[2019]: time="2025-06-20T18:26:24.149914668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"50fdcd4687ec3bf2122dd9bf0078cff72caef901b9749e3242bdf2906c1e67e8\" pid:6880 exited_at:{seconds:1750443984 nanos:149153280}"
Jun 20 18:26:24.314792 sshd[6893]: Connection closed by 139.178.68.195 port 34638
Jun 20 18:26:24.315968 sshd-session[6887]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:24.328705 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 18:26:24.335193 systemd[1]: sshd@25-172.31.21.135:22-139.178.68.195:34638.service: Deactivated successfully.
Jun 20 18:26:24.346725 systemd-logind[1989]: Session 26 logged out. Waiting for processes to exit.
Jun 20 18:26:24.350689 systemd-logind[1989]: Removed session 26.
Jun 20 18:26:26.816263 containerd[2019]: time="2025-06-20T18:26:26.816095213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" id:\"c3b9f3c66272c15777ef217f7a496ae93a0f35a3d4c28afe8a11aef27f0a3805\" pid:6925 exited_at:{seconds:1750443986 nanos:815665109}"
Jun 20 18:26:29.356042 systemd[1]: Started sshd@26-172.31.21.135:22-139.178.68.195:34652.service - OpenSSH per-connection server daemon (139.178.68.195:34652).
Jun 20 18:26:29.568374 sshd[6935]: Accepted publickey for core from 139.178.68.195 port 34652 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ
Jun 20 18:26:29.570677 sshd-session[6935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:29.581856 systemd-logind[1989]: New session 27 of user core.
Jun 20 18:26:29.594685 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 18:26:29.880560 sshd[6937]: Connection closed by 139.178.68.195 port 34652
Jun 20 18:26:29.881641 sshd-session[6935]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:29.890107 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 18:26:29.892318 systemd[1]: sshd@26-172.31.21.135:22-139.178.68.195:34652.service: Deactivated successfully.
Jun 20 18:26:29.904205 systemd-logind[1989]: Session 27 logged out. Waiting for processes to exit.
Jun 20 18:26:29.910492 systemd-logind[1989]: Removed session 27.
Jun 20 18:26:31.662172 containerd[2019]: time="2025-06-20T18:26:31.662108745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"215a4e12f37559d9896a8c423aeab16f512ef7f91f6b1d4e99c1a2aa3106fc47\" pid:6961 exited_at:{seconds:1750443991 nanos:661613505}"
Jun 20 18:26:34.922805 systemd[1]: Started sshd@27-172.31.21.135:22-139.178.68.195:49606.service - OpenSSH per-connection server daemon (139.178.68.195:49606).
Jun 20 18:26:35.130836 sshd[6973]: Accepted publickey for core from 139.178.68.195 port 49606 ssh2: RSA SHA256:skNCy3KG09T4cc3lQ0Jm6LzYT72UfVverdzX6mhfhaQ
Jun 20 18:26:35.135139 sshd-session[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:35.144983 systemd-logind[1989]: New session 28 of user core.
Jun 20 18:26:35.153679 systemd[1]: Started session-28.scope - Session 28 of User core.
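Annotation: the exited_at field in the TaskExit events above is a protobuf timestamp in Unix seconds/nanoseconds; converting it back shows it agrees with the journal timestamp of the entry that logged it (seconds:1750443984 is 2025-06-20T18:26:24Z, matching the "Jun 20 18:26:24.150106" line). A small standard-library check, using the two exit events from this span:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the TaskExit entries above.
	events := []struct {
		id      string
		seconds int64
		nanos   int64
	}{
		{"50fdcd4687ec3bf2122dd9bf0078cff72caef901b9749e3242bdf2906c1e67e8", 1750443984, 149153280},
		{"c3b9f3c66272c15777ef217f7a496ae93a0f35a3d4c28afe8a11aef27f0a3805", 1750443986, 815665109},
	}
	for _, e := range events {
		t := time.Unix(e.seconds, e.nanos).UTC()
		// Prints 2025-06-20T18:26:24... and 2025-06-20T18:26:26...,
		// lining up with the journal lines that recorded the events.
		fmt.Printf("exec %s exited at %s\n", e.id, t.Format(time.RFC3339Nano))
	}
}
```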
Jun 20 18:26:35.461948 sshd[6975]: Connection closed by 139.178.68.195 port 49606
Jun 20 18:26:35.462930 sshd-session[6973]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:35.472431 systemd[1]: sshd@27-172.31.21.135:22-139.178.68.195:49606.service: Deactivated successfully.
Jun 20 18:26:35.478093 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 18:26:35.481846 systemd-logind[1989]: Session 28 logged out. Waiting for processes to exit.
Jun 20 18:26:35.492636 systemd-logind[1989]: Removed session 28.
Jun 20 18:26:35.780834 containerd[2019]: time="2025-06-20T18:26:35.780760454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" id:\"b77b151df79b72a5e5a4d6bad664f1b5ca81a45dd235eef4dbb5e88db007cba4\" pid:6999 exited_at:{seconds:1750443995 nanos:778325126}"
Jun 20 18:26:37.643930 containerd[2019]: time="2025-06-20T18:26:37.643325247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" id:\"7e61703db6c4ccdfe73b0156fcc3c10ed00d968e284e884326c27ec9f25feef5\" pid:7035 exited_at:{seconds:1750443997 nanos:642948159}"
Jun 20 18:26:50.100210 kubelet[3506]: E0620 18:26:50.100008 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-135?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jun 20 18:26:50.290245 systemd[1]: cri-containerd-a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae.scope: Deactivated successfully.
Jun 20 18:26:50.292984 systemd[1]: cri-containerd-a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae.scope: Consumed 22.700s CPU time, 112.9M memory peak, 508K read from disk.
Jun 20 18:26:50.296088 containerd[2019]: time="2025-06-20T18:26:50.295525754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\" id:\"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\" pid:3829 exit_status:1 exited_at:{seconds:1750444010 nanos:294709958}"
Jun 20 18:26:50.296088 containerd[2019]: time="2025-06-20T18:26:50.295700786Z" level=info msg="received exit event container_id:\"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\" id:\"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\" pid:3829 exit_status:1 exited_at:{seconds:1750444010 nanos:294709958}"
Jun 20 18:26:50.335519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae-rootfs.mount: Deactivated successfully.
Jun 20 18:26:50.508684 systemd[1]: cri-containerd-5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff.scope: Deactivated successfully.
Jun 20 18:26:50.509241 systemd[1]: cri-containerd-5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff.scope: Consumed 5.352s CPU time, 58M memory peak, 896K read from disk.
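Annotation: the kubelet error at 18:26:50 is a node-lease renewal (a PUT to .../kube-node-lease/leases/ip-172-31-21-135?timeout=10s) that the HTTP client abandoned after its 10-second timeout, the usual sign of an unresponsive or overloaded API server; the control-plane containers that exit immediately afterwards are consistent with that. The error string itself is ordinary net/http client-timeout behaviour. A hedged sketch of that behaviour against a placeholder endpoint (the URL and empty body below are illustrative, not the kubelet's real lease client):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A 10s client timeout, like the ?timeout=10s on the kubelet's lease
	// renewal. If no response headers arrive in time, the request fails
	// with "Client.Timeout exceeded while awaiting headers".
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodPut,
		"https://apiserver.example:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node-a",
		bytes.NewReader([]byte(`{}`))) // placeholder body, not a real Lease object
	if err != nil {
		panic(err)
	}

	if _, err := client.Do(req); err != nil {
		fmt.Println("lease renewal failed:", err)
	}
}
```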
Jun 20 18:26:50.522380 containerd[2019]: time="2025-06-20T18:26:50.522087975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\" id:\"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\" pid:3162 exit_status:1 exited_at:{seconds:1750444010 nanos:521431863}"
Jun 20 18:26:50.522380 containerd[2019]: time="2025-06-20T18:26:50.522265923Z" level=info msg="received exit event container_id:\"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\" id:\"5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff\" pid:3162 exit_status:1 exited_at:{seconds:1750444010 nanos:521431863}"
Jun 20 18:26:50.570659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff-rootfs.mount: Deactivated successfully.
Jun 20 18:26:51.186204 kubelet[3506]: I0620 18:26:51.185841 3506 scope.go:117] "RemoveContainer" containerID="5523462888cdb6533b398d251f06164273cc8638de23bd739807c68fd37a81ff"
Jun 20 18:26:51.197941 kubelet[3506]: I0620 18:26:51.197905 3506 scope.go:117] "RemoveContainer" containerID="a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae"
Jun 20 18:26:51.211840 containerd[2019]: time="2025-06-20T18:26:51.211772450Z" level=info msg="CreateContainer within sandbox \"af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 20 18:26:51.212386 containerd[2019]: time="2025-06-20T18:26:51.212303714Z" level=info msg="CreateContainer within sandbox \"b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jun 20 18:26:51.230485 containerd[2019]: time="2025-06-20T18:26:51.227920790Z" level=info msg="Container 93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae: CDI devices from CRI Config.CDIDevices: []"
Jun 20 18:26:51.236619 containerd[2019]: time="2025-06-20T18:26:51.236465078Z" level=info msg="Container dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359: CDI devices from CRI Config.CDIDevices: []"
Jun 20 18:26:51.248794 containerd[2019]: time="2025-06-20T18:26:51.248726918Z" level=info msg="CreateContainer within sandbox \"b029f2330ab3e840ee3efbcfe04f55a0d5c7017d242421ab235ef8b5161c6ef9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\""
Jun 20 18:26:51.249766 containerd[2019]: time="2025-06-20T18:26:51.249639338Z" level=info msg="StartContainer for \"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\""
Jun 20 18:26:51.252259 containerd[2019]: time="2025-06-20T18:26:51.252120830Z" level=info msg="connecting to shim 93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae" address="unix:///run/containerd/s/150dc55f939ff0d8c8a6980d86f8b3fe1aea384beadeb6216a37d4fc5335f1db" protocol=ttrpc version=3
Jun 20 18:26:51.261095 containerd[2019]: time="2025-06-20T18:26:51.260711822Z" level=info msg="CreateContainer within sandbox \"af71e080b20a89bab08cca71edfe47b0331beb0e2b92610ab7ce33ef3d14476b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359\""
Jun 20 18:26:51.264374 containerd[2019]: time="2025-06-20T18:26:51.261916046Z" level=info msg="StartContainer for \"dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359\""
Jun 20 18:26:51.271193 containerd[2019]: time="2025-06-20T18:26:51.269861630Z" level=info msg="connecting to shim dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359" address="unix:///run/containerd/s/3da8125aa6f0057d6a80e60d049035b3c3a00a573c46098f9c19976dd9d409fe" protocol=ttrpc version=3
Jun 20 18:26:51.305991 systemd[1]: Started cri-containerd-93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae.scope - libcontainer container 93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae.
Jun 20 18:26:51.332894 systemd[1]: Started cri-containerd-dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359.scope - libcontainer container dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359.
Jun 20 18:26:51.412790 containerd[2019]: time="2025-06-20T18:26:51.412649271Z" level=info msg="StartContainer for \"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\" returns successfully"
Jun 20 18:26:51.460095 containerd[2019]: time="2025-06-20T18:26:51.459915699Z" level=info msg="StartContainer for \"dc842d8c596bcf3cf599d9cc956c64a6aa5c5e616d29d1d9bee17bd284cea359\" returns successfully"
Jun 20 18:26:53.827474 containerd[2019]: time="2025-06-20T18:26:53.827321011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6622c6d92e4663a04fd51411e4f4a77d133ece0726f03a162d18f1cec8235e09\" id:\"bb4343052f2160723925251dc87964a441764b944e9fc9757ef76ae1edfc6279\" pid:7154 exited_at:{seconds:1750444013 nanos:826223671}"
Jun 20 18:26:55.993662 systemd[1]: cri-containerd-9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67.scope: Deactivated successfully.
Jun 20 18:26:55.994834 systemd[1]: cri-containerd-9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67.scope: Consumed 5.534s CPU time, 20.8M memory peak, 84K read from disk.
Jun 20 18:26:56.002603 containerd[2019]: time="2025-06-20T18:26:56.002259522Z" level=info msg="received exit event container_id:\"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\" id:\"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\" pid:3155 exit_status:1 exited_at:{seconds:1750444016 nanos:1521882}"
Jun 20 18:26:56.006072 containerd[2019]: time="2025-06-20T18:26:56.005668770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\" id:\"9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67\" pid:3155 exit_status:1 exited_at:{seconds:1750444016 nanos:1521882}"
Jun 20 18:26:56.061947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67-rootfs.mount: Deactivated successfully.
Jun 20 18:26:56.235849 kubelet[3506]: I0620 18:26:56.235739 3506 scope.go:117] "RemoveContainer" containerID="9246006e0abd694f2a45f6ddd69ef7a2ce85f672f9b60a0ee29bebd8126d7b67"
Jun 20 18:26:56.245406 containerd[2019]: time="2025-06-20T18:26:56.243702667Z" level=info msg="CreateContainer within sandbox \"85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 20 18:26:56.262249 containerd[2019]: time="2025-06-20T18:26:56.262176223Z" level=info msg="Container 177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d: CDI devices from CRI Config.CDIDevices: []"
Jun 20 18:26:56.280791 containerd[2019]: time="2025-06-20T18:26:56.280717627Z" level=info msg="CreateContainer within sandbox \"85789cebcd1fddbc9ec8bea7b2a6e8c12d5879ce3d824de943d61ccd648b43f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d\""
Jun 20 18:26:56.281878 containerd[2019]: time="2025-06-20T18:26:56.281832751Z" level=info msg="StartContainer for \"177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d\""
Jun 20 18:26:56.284508 containerd[2019]: time="2025-06-20T18:26:56.284388823Z" level=info msg="connecting to shim 177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d" address="unix:///run/containerd/s/cb7d31111904d8f2c972b93e94caf2cfb76e582d976bd4336457239fc3ae88e7" protocol=ttrpc version=3
Jun 20 18:26:56.328644 systemd[1]: Started cri-containerd-177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d.scope - libcontainer container 177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d.
Jun 20 18:26:56.417587 containerd[2019]: time="2025-06-20T18:26:56.417431300Z" level=info msg="StartContainer for \"177e009bd5275fee6c8c830cf3807eef68c62508d3d76c8a5753b8768187b78d\" returns successfully"
Jun 20 18:26:56.802048 containerd[2019]: time="2025-06-20T18:26:56.801986098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eaf68496ba71348c977f1c17dcf7eb9e85d97ce74fc23514a726a18ff9fc690\" id:\"1446ba95c3133917a18846519272eae518990c132517f00a734627163d40e200\" pid:7226 exit_status:1 exited_at:{seconds:1750444016 nanos:800810602}"
Jun 20 18:27:00.100865 kubelet[3506]: E0620 18:27:00.100786 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-135?timeout=10s\": context deadline exceeded"
Jun 20 18:27:02.879085 systemd[1]: cri-containerd-93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae.scope: Deactivated successfully.
Jun 20 18:27:02.882067 containerd[2019]: time="2025-06-20T18:27:02.881989060Z" level=info msg="received exit event container_id:\"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\" id:\"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\" pid:7100 exit_status:1 exited_at:{seconds:1750444022 nanos:880774768}"
Jun 20 18:27:02.883570 containerd[2019]: time="2025-06-20T18:27:02.882538288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\" id:\"93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae\" pid:7100 exit_status:1 exited_at:{seconds:1750444022 nanos:880774768}"
Jun 20 18:27:02.922683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae-rootfs.mount: Deactivated successfully.
Jun 20 18:27:03.276455 kubelet[3506]: I0620 18:27:03.276421 3506 scope.go:117] "RemoveContainer" containerID="a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae"
Jun 20 18:27:03.277420 kubelet[3506]: I0620 18:27:03.277012 3506 scope.go:117] "RemoveContainer" containerID="93082b59d3b292d009edca9edbe4274a1f67c4c459ef8faa046df9431bd261ae"
Jun 20 18:27:03.277420 kubelet[3506]: E0620 18:27:03.277301 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-68f7c7984d-5swd4_tigera-operator(a108c235-2a29-4475-9bdf-9df7e15f7776)\"" pod="tigera-operator/tigera-operator-68f7c7984d-5swd4" podUID="a108c235-2a29-4475-9bdf-9df7e15f7776"
Jun 20 18:27:03.281153 containerd[2019]: time="2025-06-20T18:27:03.281109710Z" level=info msg="RemoveContainer for \"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\""
Jun 20 18:27:03.291926 containerd[2019]: time="2025-06-20T18:27:03.291776366Z" level=info msg="RemoveContainer for \"a5c9a2e9e21fed2a933050728c880dbd2a5c21001ff426791d2c2e5908eb87ae\" returns successfully"
Jun 20 18:27:05.716493 containerd[2019]: time="2025-06-20T18:27:05.716422182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2084fc6c747d0f179376b80c7db05a0f9de0f3ff40f1fbc24d651dc8af1414bb\" id:\"6267a1c3471c30e46bb9464f9ea9facc1e8962a68165a4d3eb29ec4e003df908\" pid:7263 exited_at:{seconds:1750444025 nanos:715906998}"
Jun 20 18:27:10.102368 kubelet[3506]: E0620 18:27:10.101553 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-135?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
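Annotation: after the restarted tigera-operator container (93082b59...) exits again, the kubelet stops restarting it immediately and reports CrashLoopBackOff with a "back-off 10s" delay. The kubelet's back-off roughly doubles on each further failed restart up to a cap (commonly around five minutes); a small sketch of that doubling-with-cap schedule follows, where the 10s initial delay and 5m cap are assumptions consistent with the log message, not values read from this node's configuration:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the back-off before restart attempt n (1-based),
// doubling from an initial delay up to a cap. The 10s/5m defaults below are
// assumptions matching the "back-off 10s" seen in the log.
func crashLoopDelay(n int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("restart %d: back-off %s\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then stays at the 5m0s cap.
}
```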