May 9 23:59:47.191459 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 9 23:59:47.191504 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:59:47.191530 kernel: KASLR disabled due to lack of seed
May 9 23:59:47.191546 kernel: efi: EFI v2.7 by EDK II
May 9 23:59:47.191563 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18
May 9 23:59:47.191579 kernel: ACPI: Early table checksum verification disabled
May 9 23:59:47.191598 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 9 23:59:47.191614 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 9 23:59:47.191630 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 9 23:59:47.191646 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 9 23:59:47.191667 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 9 23:59:47.191684 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 9 23:59:47.191701 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 9 23:59:47.191717 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 9 23:59:47.191736 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 9 23:59:47.191757 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 9 23:59:47.191775 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 9 23:59:47.191792 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 9 23:59:47.191809 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 9 23:59:47.191826 kernel: printk: bootconsole [uart0] enabled
May 9 23:59:47.191844 kernel: NUMA: Failed to initialise from firmware
May 9 23:59:47.191861 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:59:47.191878 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 9 23:59:47.191895 kernel: Zone ranges:
May 9 23:59:47.191912 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 9 23:59:47.191929 kernel: DMA32 empty
May 9 23:59:47.191950 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 9 23:59:47.191967 kernel: Movable zone start for each node
May 9 23:59:47.192001 kernel: Early memory node ranges
May 9 23:59:47.192024 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 9 23:59:47.192043 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 9 23:59:47.192060 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 9 23:59:47.192077 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 9 23:59:47.192094 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 9 23:59:47.192111 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 9 23:59:47.192128 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 9 23:59:47.192145 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 9 23:59:47.192162 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:59:47.192184 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 9 23:59:47.192202 kernel: psci: probing for conduit method from ACPI.
May 9 23:59:47.192226 kernel: psci: PSCIv1.0 detected in firmware.
May 9 23:59:47.192244 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:59:47.192292 kernel: psci: Trusted OS migration not required
May 9 23:59:47.192319 kernel: psci: SMC Calling Convention v1.1
May 9 23:59:47.192337 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:59:47.192355 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:59:47.192374 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:59:47.192392 kernel: Detected PIPT I-cache on CPU0
May 9 23:59:47.192409 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:59:47.192428 kernel: CPU features: detected: Spectre-v2
May 9 23:59:47.192446 kernel: CPU features: detected: Spectre-v3a
May 9 23:59:47.192464 kernel: CPU features: detected: Spectre-BHB
May 9 23:59:47.192482 kernel: CPU features: detected: ARM erratum 1742098
May 9 23:59:47.192500 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 9 23:59:47.192522 kernel: alternatives: applying boot alternatives
May 9 23:59:47.192543 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:59:47.192563 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:59:47.192582 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:59:47.192600 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:59:47.192650 kernel: Fallback order for Node 0: 0
May 9 23:59:47.192683 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 9 23:59:47.192702 kernel: Policy zone: Normal
May 9 23:59:47.192721 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:59:47.192739 kernel: software IO TLB: area num 2.
May 9 23:59:47.192757 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 9 23:59:47.192783 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
May 9 23:59:47.192801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:59:47.192819 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:59:47.192838 kernel: rcu: RCU event tracing is enabled.
May 9 23:59:47.192857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:59:47.192875 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:59:47.192894 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:59:47.192912 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:59:47.192930 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:59:47.192948 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:59:47.192965 kernel: GICv3: 96 SPIs implemented
May 9 23:59:47.192988 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:59:47.193006 kernel: Root IRQ handler: gic_handle_irq
May 9 23:59:47.193024 kernel: GICv3: GICv3 features: 16 PPIs
May 9 23:59:47.193042 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 9 23:59:47.193060 kernel: ITS [mem 0x10080000-0x1009ffff]
May 9 23:59:47.193078 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:59:47.193097 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:59:47.193115 kernel: GICv3: using LPI property table @0x00000004000d0000
May 9 23:59:47.193133 kernel: ITS: Using hypervisor restricted LPI range [128]
May 9 23:59:47.193151 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 9 23:59:47.193169 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:59:47.193187 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 9 23:59:47.193210 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 9 23:59:47.193228 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 9 23:59:47.193246 kernel: Console: colour dummy device 80x25
May 9 23:59:47.193305 kernel: printk: console [tty1] enabled
May 9 23:59:47.193325 kernel: ACPI: Core revision 20230628
May 9 23:59:47.193344 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 9 23:59:47.193363 kernel: pid_max: default: 32768 minimum: 301
May 9 23:59:47.193383 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:59:47.193401 kernel: landlock: Up and running.
May 9 23:59:47.193427 kernel: SELinux: Initializing.
May 9 23:59:47.193446 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:59:47.193465 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:59:47.193484 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:59:47.193503 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:59:47.193521 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:59:47.193541 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:59:47.193560 kernel: Platform MSI: ITS@0x10080000 domain created
May 9 23:59:47.193578 kernel: PCI/MSI: ITS@0x10080000 domain created
May 9 23:59:47.193601 kernel: Remapping and enabling EFI services.
May 9 23:59:47.193620 kernel: smp: Bringing up secondary CPUs ...
May 9 23:59:47.193639 kernel: Detected PIPT I-cache on CPU1
May 9 23:59:47.193658 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 9 23:59:47.193677 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 9 23:59:47.193695 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 9 23:59:47.193714 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:59:47.193732 kernel: SMP: Total of 2 processors activated.
May 9 23:59:47.193750 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:59:47.193773 kernel: CPU features: detected: 32-bit EL1 Support
May 9 23:59:47.193792 kernel: CPU features: detected: CRC32 instructions
May 9 23:59:47.193811 kernel: CPU: All CPU(s) started at EL1
May 9 23:59:47.193842 kernel: alternatives: applying system-wide alternatives
May 9 23:59:47.193865 kernel: devtmpfs: initialized
May 9 23:59:47.193885 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:59:47.193905 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:59:47.193924 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:59:47.193943 kernel: SMBIOS 3.0.0 present.
May 9 23:59:47.193963 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 9 23:59:47.193987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:59:47.194007 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:59:47.194027 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:59:47.194047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:59:47.194066 kernel: audit: initializing netlink subsys (disabled)
May 9 23:59:47.194086 kernel: audit: type=2000 audit(0.285:1): state=initialized audit_enabled=0 res=1
May 9 23:59:47.194105 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:59:47.194129 kernel: cpuidle: using governor menu
May 9 23:59:47.194149 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:59:47.194169 kernel: ASID allocator initialised with 65536 entries
May 9 23:59:47.194189 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:59:47.194208 kernel: Serial: AMBA PL011 UART driver
May 9 23:59:47.194227 kernel: Modules: 17488 pages in range for non-PLT usage
May 9 23:59:47.194247 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:59:47.194295 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:59:47.194316 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:59:47.194342 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:59:47.194361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:59:47.194381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:59:47.194400 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:59:47.194419 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:59:47.194439 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:59:47.194458 kernel: ACPI: Added _OSI(Module Device)
May 9 23:59:47.194477 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:59:47.194496 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:59:47.194520 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:59:47.194540 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:59:47.194559 kernel: ACPI: Interpreter enabled
May 9 23:59:47.194578 kernel: ACPI: Using GIC for interrupt routing
May 9 23:59:47.194598 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:59:47.194617 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 9 23:59:47.194938 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:59:47.195157 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:59:47.195471 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:59:47.195690 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 9 23:59:47.195923 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 9 23:59:47.195952 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 9 23:59:47.195972 kernel: acpiphp: Slot [1] registered
May 9 23:59:47.196009 kernel: acpiphp: Slot [2] registered
May 9 23:59:47.196031 kernel: acpiphp: Slot [3] registered
May 9 23:59:47.196050 kernel: acpiphp: Slot [4] registered
May 9 23:59:47.196076 kernel: acpiphp: Slot [5] registered
May 9 23:59:47.196095 kernel: acpiphp: Slot [6] registered
May 9 23:59:47.196114 kernel: acpiphp: Slot [7] registered
May 9 23:59:47.196133 kernel: acpiphp: Slot [8] registered
May 9 23:59:47.196152 kernel: acpiphp: Slot [9] registered
May 9 23:59:47.196171 kernel: acpiphp: Slot [10] registered
May 9 23:59:47.196190 kernel: acpiphp: Slot [11] registered
May 9 23:59:47.196210 kernel: acpiphp: Slot [12] registered
May 9 23:59:47.196229 kernel: acpiphp: Slot [13] registered
May 9 23:59:47.196248 kernel: acpiphp: Slot [14] registered
May 9 23:59:47.196294 kernel: acpiphp: Slot [15] registered
May 9 23:59:47.196314 kernel: acpiphp: Slot [16] registered
May 9 23:59:47.196333 kernel: acpiphp: Slot [17] registered
May 9 23:59:47.196352 kernel: acpiphp: Slot [18] registered
May 9 23:59:47.196371 kernel: acpiphp: Slot [19] registered
May 9 23:59:47.196391 kernel: acpiphp: Slot [20] registered
May 9 23:59:47.196410 kernel: acpiphp: Slot [21] registered
May 9 23:59:47.196429 kernel: acpiphp: Slot [22] registered
May 9 23:59:47.196449 kernel: acpiphp: Slot [23] registered
May 9 23:59:47.196510 kernel: acpiphp: Slot [24] registered
May 9 23:59:47.196534 kernel: acpiphp: Slot [25] registered
May 9 23:59:47.196554 kernel: acpiphp: Slot [26] registered
May 9 23:59:47.196573 kernel: acpiphp: Slot [27] registered
May 9 23:59:47.196593 kernel: acpiphp: Slot [28] registered
May 9 23:59:47.196613 kernel: acpiphp: Slot [29] registered
May 9 23:59:47.196632 kernel: acpiphp: Slot [30] registered
May 9 23:59:47.196653 kernel: acpiphp: Slot [31] registered
May 9 23:59:47.196672 kernel: PCI host bridge to bus 0000:00
May 9 23:59:47.196891 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 9 23:59:47.197091 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:59:47.197350 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 9 23:59:47.201403 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 9 23:59:47.201668 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 9 23:59:47.201897 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 9 23:59:47.202111 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 9 23:59:47.202424 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 9 23:59:47.202643 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 9 23:59:47.202850 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:59:47.203073 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 9 23:59:47.204389 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 9 23:59:47.204654 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 9 23:59:47.204881 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 9 23:59:47.205098 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:59:47.205357 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 9 23:59:47.205571 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 9 23:59:47.205781 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 9 23:59:47.205988 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 9 23:59:47.206207 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 9 23:59:47.207617 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 9 23:59:47.207818 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:59:47.208027 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 9 23:59:47.208057 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:59:47.208078 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:59:47.208099 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:59:47.208119 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:59:47.208138 kernel: iommu: Default domain type: Translated
May 9 23:59:47.208158 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:59:47.208186 kernel: efivars: Registered efivars operations
May 9 23:59:47.208206 kernel: vgaarb: loaded
May 9 23:59:47.208226 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:59:47.208245 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:59:47.208710 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:59:47.208733 kernel: pnp: PnP ACPI init
May 9 23:59:47.208981 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 9 23:59:47.209011 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:59:47.209038 kernel: NET: Registered PF_INET protocol family
May 9 23:59:47.209059 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:59:47.209079 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:59:47.209098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:59:47.209118 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:59:47.209137 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:59:47.209157 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:59:47.209176 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:59:47.209196 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:59:47.209220 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:59:47.209239 kernel: PCI: CLS 0 bytes, default 64
May 9 23:59:47.210098 kernel: kvm [1]: HYP mode not available
May 9 23:59:47.210144 kernel: Initialise system trusted keyrings
May 9 23:59:47.210165 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:59:47.210186 kernel: Key type asymmetric registered
May 9 23:59:47.210206 kernel: Asymmetric key parser 'x509' registered
May 9 23:59:47.210228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:59:47.210248 kernel: io scheduler mq-deadline registered
May 9 23:59:47.210303 kernel: io scheduler kyber registered
May 9 23:59:47.210323 kernel: io scheduler bfq registered
May 9 23:59:47.210607 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 9 23:59:47.210637 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:59:47.210657 kernel: ACPI: button: Power Button [PWRB]
May 9 23:59:47.210677 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 9 23:59:47.210696 kernel: ACPI: button: Sleep Button [SLPB]
May 9 23:59:47.210715 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:59:47.210742 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 9 23:59:47.210956 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 9 23:59:47.210985 kernel: printk: console [ttyS0] disabled
May 9 23:59:47.211006 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 9 23:59:47.211026 kernel: printk: console [ttyS0] enabled
May 9 23:59:47.211046 kernel: printk: bootconsole [uart0] disabled
May 9 23:59:47.211065 kernel: thunder_xcv, ver 1.0
May 9 23:59:47.211084 kernel: thunder_bgx, ver 1.0
May 9 23:59:47.211103 kernel: nicpf, ver 1.0
May 9 23:59:47.211127 kernel: nicvf, ver 1.0
May 9 23:59:47.211575 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:59:47.211779 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:59:46 UTC (1746835186)
May 9 23:59:47.211806 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:59:47.211827 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 9 23:59:47.211847 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:59:47.211866 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:59:47.211886 kernel: NET: Registered PF_INET6 protocol family
May 9 23:59:47.211912 kernel: Segment Routing with IPv6
May 9 23:59:47.211932 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:59:47.211952 kernel: NET: Registered PF_PACKET protocol family
May 9 23:59:47.211971 kernel: Key type dns_resolver registered
May 9 23:59:47.212010 kernel: registered taskstats version 1
May 9 23:59:47.212032 kernel: Loading compiled-in X.509 certificates
May 9 23:59:47.212052 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:59:47.212071 kernel: Key type .fscrypt registered
May 9 23:59:47.212090 kernel: Key type fscrypt-provisioning registered
May 9 23:59:47.212115 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:59:47.212135 kernel: ima: Allocated hash algorithm: sha1
May 9 23:59:47.212154 kernel: ima: No architecture policies found
May 9 23:59:47.212173 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:59:47.212192 kernel: clk: Disabling unused clocks
May 9 23:59:47.212211 kernel: Freeing unused kernel memory: 39424K
May 9 23:59:47.212230 kernel: Run /init as init process
May 9 23:59:47.212249 kernel: with arguments:
May 9 23:59:47.213158 kernel: /init
May 9 23:59:47.213181 kernel: with environment:
May 9 23:59:47.213211 kernel: HOME=/
May 9 23:59:47.213231 kernel: TERM=linux
May 9 23:59:47.213250 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:59:47.213301 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:59:47.213356 systemd[1]: Detected virtualization amazon.
May 9 23:59:47.213391 systemd[1]: Detected architecture arm64.
May 9 23:59:47.213416 systemd[1]: Running in initrd.
May 9 23:59:47.213444 systemd[1]: No hostname configured, using default hostname.
May 9 23:59:47.213465 systemd[1]: Hostname set to .
May 9 23:59:47.213488 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:59:47.213509 systemd[1]: Queued start job for default target initrd.target.
May 9 23:59:47.213531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:59:47.213553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:59:47.213577 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:59:47.213598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:59:47.213625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:59:47.213647 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:59:47.213672 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:59:47.213694 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:59:47.213716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:59:47.213737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:59:47.213758 systemd[1]: Reached target paths.target - Path Units.
May 9 23:59:47.213784 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:59:47.213805 systemd[1]: Reached target swap.target - Swaps.
May 9 23:59:47.213826 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:59:47.213847 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:59:47.213868 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:59:47.213890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:59:47.213911 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:59:47.213933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:59:47.213954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:59:47.213980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:59:47.214001 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:59:47.214022 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:59:47.214044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:59:47.214065 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:59:47.214086 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:59:47.214108 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:59:47.214129 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:59:47.214155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:47.214176 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:59:47.214198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:59:47.214313 systemd-journald[251]: Collecting audit messages is disabled.
May 9 23:59:47.214364 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:59:47.214388 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:59:47.214410 systemd-journald[251]: Journal started
May 9 23:59:47.214453 systemd-journald[251]: Runtime Journal (/run/log/journal/ec22b378b6b839302edf9bf0f81ee06b) is 8.0M, max 75.3M, 67.3M free.
May 9 23:59:47.190956 systemd-modules-load[252]: Inserted module 'overlay'
May 9 23:59:47.230191 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:59:47.232050 kernel: Bridge firewalling registered
May 9 23:59:47.231062 systemd-modules-load[252]: Inserted module 'br_netfilter'
May 9 23:59:47.235706 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:59:47.236414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:59:47.246528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:59:47.249029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:59:47.277980 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:59:47.283920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:47.298489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:59:47.317695 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:47.326498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:59:47.332306 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:59:47.363804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:59:47.376151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:47.382308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:59:47.393525 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:59:47.421701 dracut-cmdline[289]: dracut-dracut-053
May 9 23:59:47.432126 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:59:47.471144 systemd-resolved[282]: Positive Trust Anchors:
May 9 23:59:47.471184 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:59:47.471247 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:59:47.606291 kernel: SCSI subsystem initialized
May 9 23:59:47.614288 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:59:47.626294 kernel: iscsi: registered transport (tcp)
May 9 23:59:47.648501 kernel: iscsi: registered transport (qla4xxx)
May 9 23:59:47.648596 kernel: QLogic iSCSI HBA Driver
May 9 23:59:47.697295 kernel: random: crng init done
May 9 23:59:47.697588 systemd-resolved[282]: Defaulting to hostname 'linux'.
May 9 23:59:47.703034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:59:47.708077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:47.731752 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:59:47.743004 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:59:47.775592 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:59:47.775668 kernel: device-mapper: uevent: version 1.0.3
May 9 23:59:47.775697 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:59:47.842299 kernel: raid6: neonx8 gen() 6807 MB/s
May 9 23:59:47.859286 kernel: raid6: neonx4 gen() 6613 MB/s
May 9 23:59:47.876285 kernel: raid6: neonx2 gen() 5490 MB/s
May 9 23:59:47.893285 kernel: raid6: neonx1 gen() 3971 MB/s
May 9 23:59:47.910285 kernel: raid6: int64x8 gen() 3834 MB/s
May 9 23:59:47.927285 kernel: raid6: int64x4 gen() 3726 MB/s
May 9 23:59:47.944285 kernel: raid6: int64x2 gen() 3622 MB/s
May 9 23:59:47.962033 kernel: raid6: int64x1 gen() 2778 MB/s
May 9 23:59:47.962065 kernel: raid6: using algorithm neonx8 gen() 6807 MB/s
May 9 23:59:47.980017 kernel: raid6: .... xor() 4862 MB/s, rmw enabled
May 9 23:59:47.980054 kernel: raid6: using neon recovery algorithm
May 9 23:59:47.987290 kernel: xor: measuring software checksum speed
May 9 23:59:47.989415 kernel: 8regs : 10231 MB/sec
May 9 23:59:47.989452 kernel: 32regs : 11912 MB/sec
May 9 23:59:47.990572 kernel: arm64_neon : 9585 MB/sec
May 9 23:59:47.990604 kernel: xor: using function: 32regs (11912 MB/sec)
May 9 23:59:48.075309 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:59:48.094204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:59:48.107560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:59:48.153487 systemd-udevd[470]: Using default interface naming scheme 'v255'.
May 9 23:59:48.161711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:59:48.176040 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
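[Editor's note] The raid6/xor lines above show the kernel benchmarking several interchangeable software implementations at boot and keeping the fastest one (neonx8 for RAID6 gen, 32regs for xor). A toy illustration of the same measure-and-pick pattern; the function names, buffer size, and iteration count below are illustrative, not the kernel's:

```python
import time

def bench(fn, buf, iters=20):
    """Return throughput of fn over buf in MB/s, like the kernel's probe."""
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(buf)
    return len(buf) * iters / (time.perf_counter() - t0) / 1e6

# Two candidate xor-checksum implementations (byte-at-a-time vs 8 bytes
# at a time), standing in for the kernel's 8regs/32regs/neon variants.
def xor_bytes(buf):
    acc = 0
    for b in buf:
        acc ^= b
    return acc

def xor_int64(buf):
    acc = 0
    for i in range(0, len(buf), 8):
        acc ^= int.from_bytes(buf[i:i + 8], "little")
    return acc

buf = bytes(range(256)) * 256  # 64 KiB test buffer
speed, name = max((bench(f, buf), f.__name__) for f in (xor_bytes, xor_int64))
print(f"xor: using function: {name} ({speed:.0f} MB/sec)")
```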
May 9 23:59:48.205678 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 9 23:59:48.260612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:59:48.273661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:59:48.386286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:59:48.405584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:59:48.450665 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:59:48.456189 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:48.469839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:48.474810 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:59:48.493767 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:59:48.526507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:59:48.595337 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:59:48.595413 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 9 23:59:48.600679 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 9 23:59:48.601094 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 9 23:59:48.605527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:59:48.606066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:48.616574 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:89:69:34:5b:59
May 9 23:59:48.619399 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:48.622325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:48.622481 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:59:48.622604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:48.625429 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:48.657705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:48.672732 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 9 23:59:48.672795 kernel: nvme nvme0: pci function 0000:00:04.0
May 9 23:59:48.685293 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 9 23:59:48.697220 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 23:59:48.697314 kernel: GPT:9289727 != 16777215
May 9 23:59:48.697343 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 23:59:48.698753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:48.719397 kernel: GPT:9289727 != 16777215
May 9 23:59:48.719436 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:59:48.719463 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:48.712739 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:48.773629 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
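[Editor's note] The GPT warnings above are benign on a first boot: the disk image was written for a smaller disk, so the backup GPT header sits at LBA 9289727 while the EBS volume's last LBA is 16777215; the disk-uuid step further down rewrites the headers. A minimal sketch of the check the kernel is performing, assuming 512-byte logical sectors (the helper name is ours, not the kernel's):

```python
import struct

SECTOR = 512  # assumption: 512-byte logical sectors, as on EBS volumes

def check_alt_gpt(path):
    """Compare the primary GPT header's alternate-LBA field against the
    device's actual last LBA, mirroring the kernel warning in the log."""
    with open(path, "rb") as f:
        f.seek(1 * SECTOR)             # primary GPT header lives at LBA 1
        hdr = f.read(92)
        if hdr[:8] != b"EFI PART":     # GPT signature
            raise ValueError("no GPT found")
        # the backup (alternate) header LBA is the little-endian u64 at offset 32
        (alt_lba,) = struct.unpack_from("<Q", hdr, 32)
        f.seek(0, 2)                   # seek to end to get device/image size
        last_lba = f.tell() // SECTOR - 1
        if alt_lba != last_lba:
            print(f"GPT:{alt_lba} != {last_lba} (backup header not at end of disk)")

check_alt_gpt("/dev/nvme0n1")  # needs read access to the block device
```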
May 9 23:59:48.819340 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (514)
May 9 23:59:48.840305 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (513)
May 9 23:59:48.915431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 9 23:59:48.939962 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 9 23:59:48.957050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 9 23:59:48.960161 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 9 23:59:48.977058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 23:59:49.000610 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:59:49.014543 disk-uuid[662]: Primary Header is updated.
May 9 23:59:49.014543 disk-uuid[662]: Secondary Entries is updated.
May 9 23:59:49.014543 disk-uuid[662]: Secondary Header is updated.
May 9 23:59:49.027329 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:49.037297 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:50.041340 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:50.041502 disk-uuid[663]: The operation has completed successfully.
May 9 23:59:50.226468 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:59:50.226693 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:59:50.272020 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:59:50.280190 sh[921]: Success
May 9 23:59:50.296299 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:59:50.406456 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:59:50.426476 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:59:50.429981 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:59:50.471044 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 9 23:59:50.471107 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:50.471145 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:59:50.472403 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:59:50.473454 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:59:50.598275 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 9 23:59:50.619680 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:59:50.626242 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:59:50.640503 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:59:50.649856 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:59:50.674280 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:50.674348 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:50.675522 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:59:50.686292 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:59:50.704574 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:59:50.708983 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:50.718522 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:59:50.731345 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:59:50.843322 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:59:50.864336 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:59:50.908751 systemd-networkd[1113]: lo: Link UP
May 9 23:59:50.908774 systemd-networkd[1113]: lo: Gained carrier
May 9 23:59:50.911201 systemd-networkd[1113]: Enumeration completed
May 9 23:59:50.911926 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:50.911933 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:59:50.912889 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:59:50.916232 systemd-networkd[1113]: eth0: Link UP
May 9 23:59:50.916240 systemd-networkd[1113]: eth0: Gained carrier
May 9 23:59:50.916277 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:50.919831 systemd[1]: Reached target network.target - Network.
May 9 23:59:50.943349 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.18.167/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 9 23:59:51.109726 ignition[1022]: Ignition 2.19.0
May 9 23:59:51.109753 ignition[1022]: Stage: fetch-offline
May 9 23:59:51.111622 ignition[1022]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.111647 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:51.117993 ignition[1022]: Ignition finished successfully
May 9 23:59:51.121902 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:59:51.132577 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
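[Editor's note] In the fetch stage that follows (and again in the kargs, disks, mount, files, and umount stages), Ignition first PUTs to http://169.254.169.254/latest/api/token and only then GETs user data: this is the EC2 IMDSv2 session-token flow. A minimal sketch of those same two requests, using only the endpoints visible in the log; the token TTL value is an assumption (21600 s is the allowed maximum):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: the "PUT .../latest/api/token: attempt #1" seen in the log.
# The TTL header is mandatory for IMDSv2; the value here is assumed.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: the "GET .../2019-10-01/user-data: attempt #1" seen in the log,
# authenticated with the session token from step 1.
req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(req, timeout=2).read()
print(f"fetched {len(user_data)} bytes of user data")
```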
May 9 23:59:51.165551 ignition[1123]: Ignition 2.19.0
May 9 23:59:51.165583 ignition[1123]: Stage: fetch
May 9 23:59:51.166332 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.166358 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:51.166510 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:51.180735 ignition[1123]: PUT result: OK
May 9 23:59:51.184778 ignition[1123]: parsed url from cmdline: ""
May 9 23:59:51.184794 ignition[1123]: no config URL provided
May 9 23:59:51.184812 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:59:51.184837 ignition[1123]: no config at "/usr/lib/ignition/user.ign"
May 9 23:59:51.184869 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:51.191518 ignition[1123]: PUT result: OK
May 9 23:59:51.191592 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 9 23:59:51.195689 ignition[1123]: GET result: OK
May 9 23:59:51.202666 unknown[1123]: fetched base config from "system"
May 9 23:59:51.195857 ignition[1123]: parsing config with SHA512: 5d36c61a1e85dd95fc03c6922b433b160413878cdd8a47c43f17c039fa4f159358384938960eea791f3d9d73c921e79d713fd7eef1d1babc9c7af691b13190e1
May 9 23:59:51.202683 unknown[1123]: fetched base config from "system"
May 9 23:59:51.203960 ignition[1123]: fetch: fetch complete
May 9 23:59:51.202697 unknown[1123]: fetched user config from "aws"
May 9 23:59:51.203994 ignition[1123]: fetch: fetch passed
May 9 23:59:51.209480 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 23:59:51.204103 ignition[1123]: Ignition finished successfully
May 9 23:59:51.232534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:59:51.262079 ignition[1129]: Ignition 2.19.0
May 9 23:59:51.262110 ignition[1129]: Stage: kargs
May 9 23:59:51.262831 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.262856 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:51.262997 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:51.266221 ignition[1129]: PUT result: OK
May 9 23:59:51.278062 ignition[1129]: kargs: kargs passed
May 9 23:59:51.278373 ignition[1129]: Ignition finished successfully
May 9 23:59:51.286512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:59:51.301679 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:59:51.326369 ignition[1136]: Ignition 2.19.0
May 9 23:59:51.326388 ignition[1136]: Stage: disks
May 9 23:59:51.327551 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:51.327576 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:51.327734 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:51.332992 ignition[1136]: PUT result: OK
May 9 23:59:51.342712 ignition[1136]: disks: disks passed
May 9 23:59:51.342865 ignition[1136]: Ignition finished successfully
May 9 23:59:51.348302 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:59:51.352935 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:59:51.355871 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:59:51.358769 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:59:51.361169 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:59:51.363600 systemd[1]: Reached target basic.target - Basic System.
May 9 23:59:51.392628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:59:51.441028 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 23:59:51.446366 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:59:51.459499 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:59:51.539747 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:59:51.540767 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:59:51.546058 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:59:51.564500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:51.573480 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:59:51.581049 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 23:59:51.581149 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:59:51.586689 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:51.609832 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:59:51.613524 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1163)
May 9 23:59:51.619015 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:51.619062 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:51.620852 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:59:51.623786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:59:51.638304 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:59:51.641382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:52.058643 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:59:52.079629 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory
May 9 23:59:52.089565 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:59:52.099602 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:59:52.390984 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:59:52.402656 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:59:52.404025 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:59:52.429588 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:52.429641 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:59:52.469381 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:59:52.481732 ignition[1276]: INFO : Ignition 2.19.0
May 9 23:59:52.481732 ignition[1276]: INFO : Stage: mount
May 9 23:59:52.481732 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:52.481732 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:52.481732 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:52.494965 ignition[1276]: INFO : PUT result: OK
May 9 23:59:52.502665 ignition[1276]: INFO : mount: mount passed
May 9 23:59:52.504708 ignition[1276]: INFO : Ignition finished successfully
May 9 23:59:52.509327 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:59:52.524453 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:59:52.552684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:52.577293 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1289)
May 9 23:59:52.581050 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:52.581096 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:52.582241 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:59:52.587271 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:59:52.591163 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:52.638037 ignition[1306]: INFO : Ignition 2.19.0
May 9 23:59:52.641411 ignition[1306]: INFO : Stage: files
May 9 23:59:52.641411 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:52.641411 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:52.641411 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:52.651155 ignition[1306]: INFO : PUT result: OK
May 9 23:59:52.659547 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:59:52.662784 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:59:52.662784 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:59:52.673774 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:59:52.677331 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:59:52.680589 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:59:52.680378 unknown[1306]: wrote ssh authorized keys file for user: core
May 9 23:59:52.694842 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:52.700637 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 23:59:52.741422 systemd-networkd[1113]: eth0: Gained IPv6LL
May 9 23:59:52.794478 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 23:59:52.936197 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:52.936197 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:52.946340 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 23:59:53.341992 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 23:59:53.695120 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:53.695120 ignition[1306]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 23:59:53.705326 ignition[1306]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:53.705326 ignition[1306]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:53.705326 ignition[1306]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 23:59:53.705326 ignition[1306]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:59:53.705326 ignition[1306]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:59:53.705326 ignition[1306]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:53.705326 ignition[1306]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:53.705326 ignition[1306]: INFO : files: files passed
May 9 23:59:53.705326 ignition[1306]: INFO : Ignition finished successfully
May 9 23:59:53.736929 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:59:53.757710 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:59:53.765634 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:59:53.775483 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:59:53.775682 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:59:53.802552 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:53.802552 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:53.810960 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:53.817339 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:59:53.823003 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:59:53.842631 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:59:53.899320 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:59:53.901702 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:59:53.906478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:59:53.917352 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:59:53.919846 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:59:53.936563 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:59:53.974446 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:53.994669 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:59:54.025346 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:59:54.027303 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:59:54.032280 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:54.037232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:54.039670 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:59:54.041587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:59:54.041688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:54.044310 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:59:54.046485 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:59:54.048689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:59:54.054710 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:54.055392 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:59:54.056105 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:59:54.058936 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:54.059637 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:59:54.062479 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:59:54.063151 systemd[1]: Stopped target swap.target - Swaps. May 9 23:59:54.063869 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:59:54.063976 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:59:54.067459 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:54.068121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:54.071653 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:59:54.127756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:54.131500 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:59:54.131593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:59:54.140637 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:59:54.140728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:59:54.144228 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:59:54.144327 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:59:54.169570 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:59:54.176430 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:59:54.178514 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:59:54.178624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:59:54.190104 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:59:54.190223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:59:54.222179 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:59:54.234308 ignition[1359]: INFO : Ignition 2.19.0 May 9 23:59:54.234308 ignition[1359]: INFO : Stage: umount May 9 23:59:54.241930 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:59:54.241930 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:59:54.241930 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:59:54.235031 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:59:54.235222 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:59:54.254396 ignition[1359]: INFO : PUT result: OK May 9 23:59:54.259626 ignition[1359]: INFO : umount: umount passed May 9 23:59:54.261660 ignition[1359]: INFO : Ignition finished successfully May 9 23:59:54.266124 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:59:54.268330 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:59:54.271833 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:59:54.271920 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:59:54.276216 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:59:54.276332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:59:54.287339 systemd[1]: ignition-fetch.service: Deactivated successfully. May 9 23:59:54.287423 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 9 23:59:54.289774 systemd[1]: Stopped target network.target - Network. 
May 9 23:59:54.291828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:59:54.291907 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:59:54.294707 systemd[1]: Stopped target paths.target - Path Units. May 9 23:59:54.296785 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:59:54.312565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:54.315310 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:59:54.321631 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:59:54.323603 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:59:54.323682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:59:54.325708 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:59:54.325780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:59:54.327864 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:59:54.327958 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:59:54.332031 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:59:54.332108 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:59:54.338394 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:59:54.338661 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 23:59:54.343075 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:59:54.345507 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:59:54.374329 systemd-networkd[1113]: eth0: DHCPv6 lease lost May 9 23:59:54.377571 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:59:54.377852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:59:54.391819 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:59:54.392337 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 23:59:54.402477 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:59:54.402598 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:54.417413 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:59:54.422748 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:59:54.422867 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:59:54.426494 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:59:54.426575 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:54.429406 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:59:54.429488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:59:54.446277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:59:54.446491 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:54.454690 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:54.487486 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:59:54.488745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 9 23:59:54.496238 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:59:54.496472 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:59:54.501925 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:59:54.502060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:59:54.504616 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:59:54.504689 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:54.507191 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:59:54.507298 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:59:54.509978 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:59:54.510059 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:59:54.512573 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:59:54.512653 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:54.545777 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:59:54.554032 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:59:54.554146 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:54.557086 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 9 23:59:54.557167 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:59:54.560090 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:59:54.560168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:54.563056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:59:54.563134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:54.598782 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 23:59:54.599140 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:59:54.608155 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:59:54.618586 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:59:54.638231 systemd[1]: Switching root. May 9 23:59:54.686617 systemd-journald[251]: Journal stopped May 9 23:59:57.133052 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). May 9 23:59:57.133198 kernel: SELinux: policy capability network_peer_controls=1 May 9 23:59:57.133246 kernel: SELinux: policy capability open_perms=1 May 9 23:59:57.133301 kernel: SELinux: policy capability extended_socket_class=1 May 9 23:59:57.133343 kernel: SELinux: policy capability always_check_network=0 May 9 23:59:57.133374 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 23:59:57.133404 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 23:59:57.133435 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 23:59:57.133472 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 23:59:57.133501 kernel: audit: type=1403 audit(1746835195.260:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 23:59:57.133541 systemd[1]: Successfully loaded SELinux policy in 48.884ms. May 9 23:59:57.133594 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.898ms. 
May 9 23:59:57.133629 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:59:57.133662 systemd[1]: Detected virtualization amazon. May 9 23:59:57.133694 systemd[1]: Detected architecture arm64. May 9 23:59:57.133726 systemd[1]: Detected first boot. May 9 23:59:57.133763 systemd[1]: Initializing machine ID from VM UUID. May 9 23:59:57.133795 zram_generator::config[1400]: No configuration found. May 9 23:59:57.133830 systemd[1]: Populated /etc with preset unit settings. May 9 23:59:57.133860 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 23:59:57.133892 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 23:59:57.133923 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 23:59:57.133956 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 23:59:57.133988 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 23:59:57.134018 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 23:59:57.134051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 23:59:57.134083 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 23:59:57.134114 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 23:59:57.134144 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 23:59:57.134173 systemd[1]: Created slice user.slice - User and Session Slice. May 9 23:59:57.134204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:57.134238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:57.134628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 23:59:57.134668 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 23:59:57.134700 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 23:59:57.134730 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:59:57.134760 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 9 23:59:57.134791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:57.134823 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 23:59:57.134854 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 23:59:57.134885 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 23:59:57.134920 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 23:59:57.134953 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:59:57.134984 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:59:57.135017 systemd[1]: Reached target slices.target - Slice Units. May 9 23:59:57.135049 systemd[1]: Reached target swap.target - Swaps. 
May 9 23:59:57.135080 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 23:59:57.135111 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 23:59:57.135142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:57.135171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:59:57.135205 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:57.135236 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 23:59:57.135299 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 23:59:57.135357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 23:59:57.135391 systemd[1]: Mounting media.mount - External Media Directory... May 9 23:59:57.135420 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 23:59:57.135450 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 23:59:57.135481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 23:59:57.135512 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 23:59:57.135548 systemd[1]: Reached target machines.target - Containers. May 9 23:59:57.135579 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 23:59:57.135611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:57.135642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:59:57.135673 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 23:59:57.135705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:57.135735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:59:57.135775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:57.135809 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 23:59:57.135839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:57.135868 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 23:59:57.135898 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 23:59:57.135927 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 23:59:57.135976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 23:59:57.136007 systemd[1]: Stopped systemd-fsck-usr.service. May 9 23:59:57.136036 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:59:57.136090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:59:57.136127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 23:59:57.136158 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
May 9 23:59:57.136186 kernel: ACPI: bus type drm_connector registered May 9 23:59:57.136216 kernel: loop: module loaded May 9 23:59:57.136244 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:59:57.136291 kernel: fuse: init (API version 7.39) May 9 23:59:57.136324 systemd[1]: verity-setup.service: Deactivated successfully. May 9 23:59:57.136353 systemd[1]: Stopped verity-setup.service. May 9 23:59:57.136383 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 23:59:57.136417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 23:59:57.136448 systemd[1]: Mounted media.mount - External Media Directory. May 9 23:59:57.136477 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 23:59:57.136506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 23:59:57.136535 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 23:59:57.140645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:57.140712 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 23:59:57.140744 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 23:59:57.140777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:57.140847 systemd-journald[1485]: Collecting audit messages is disabled. May 9 23:59:57.140908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:57.140939 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:59:57.140973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:59:57.141003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:57.141033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:57.141061 systemd-journald[1485]: Journal started May 9 23:59:57.141107 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec22b378b6b839302edf9bf0f81ee06b) is 8.0M, max 75.3M, 67.3M free. May 9 23:59:56.515012 systemd[1]: Queued start job for default target multi-user.target. May 9 23:59:56.575550 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 9 23:59:56.576379 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 23:59:57.148368 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:59:57.152654 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 23:59:57.154323 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 23:59:57.157765 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:57.158244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:57.161930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:59:57.165841 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 23:59:57.169577 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 23:59:57.179070 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 23:59:57.206140 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 23:59:57.230602 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 23:59:57.246421 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 9 23:59:57.251378 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 23:59:57.251451 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:59:57.258503 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 23:59:57.270597 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 23:59:57.285180 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 23:59:57.291740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:57.317639 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 23:59:57.334630 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 23:59:57.340719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:59:57.342847 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 23:59:57.348901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:59:57.357638 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec22b378b6b839302edf9bf0f81ee06b is 114.655ms for 901 entries. May 9 23:59:57.357638 systemd-journald[1485]: System Journal (/var/log/journal/ec22b378b6b839302edf9bf0f81ee06b) is 8.0M, max 195.6M, 187.6M free. May 9 23:59:57.497904 systemd-journald[1485]: Received client request to flush runtime journal. May 9 23:59:57.497982 kernel: loop0: detected capacity change from 0 to 194096 May 9 23:59:57.359480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:57.369656 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 23:59:57.382647 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 23:59:57.393426 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:59:57.399005 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 23:59:57.404250 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 23:59:57.411494 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 23:59:57.417401 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 23:59:57.433525 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 23:59:57.446131 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 23:59:57.461531 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 23:59:57.508279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 23:59:57.535317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 23:59:57.543078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:57.556537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 23:59:57.559779 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
May 9 23:59:57.576850 udevadm[1537]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 23:59:57.588456 systemd-tmpfiles[1530]: ACLs are not supported, ignoring. May 9 23:59:57.588496 systemd-tmpfiles[1530]: ACLs are not supported, ignoring. May 9 23:59:57.605839 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:59:57.620327 kernel: loop1: detected capacity change from 0 to 114328 May 9 23:59:57.622488 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 23:59:57.690491 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 23:59:57.704691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:59:57.754373 kernel: loop2: detected capacity change from 0 to 114432 May 9 23:59:57.755779 systemd-tmpfiles[1551]: ACLs are not supported, ignoring. May 9 23:59:57.755819 systemd-tmpfiles[1551]: ACLs are not supported, ignoring. May 9 23:59:57.770609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:57.870478 kernel: loop3: detected capacity change from 0 to 52536 May 9 23:59:57.922637 kernel: loop4: detected capacity change from 0 to 194096 May 9 23:59:57.960421 kernel: loop5: detected capacity change from 0 to 114328 May 9 23:59:57.975300 kernel: loop6: detected capacity change from 0 to 114432 May 9 23:59:57.992728 kernel: loop7: detected capacity change from 0 to 52536 May 9 23:59:58.020575 (sd-merge)[1557]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 9 23:59:58.022161 (sd-merge)[1557]: Merged extensions into '/usr'. May 9 23:59:58.034702 systemd[1]: Reloading requested from client PID 1529 ('systemd-sysext') (unit systemd-sysext.service)... May 9 23:59:58.034757 systemd[1]: Reloading... May 9 23:59:58.217593 zram_generator::config[1583]: No configuration found. May 9 23:59:58.551643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:58.669520 systemd[1]: Reloading finished in 633 ms. May 9 23:59:58.708354 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 23:59:58.712925 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 23:59:58.728554 systemd[1]: Starting ensure-sysext.service... May 9 23:59:58.742375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:59:58.752723 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:58.763483 systemd[1]: Reloading requested from client PID 1635 ('systemctl') (unit ensure-sysext.service)... May 9 23:59:58.763519 systemd[1]: Reloading... May 9 23:59:58.837008 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 23:59:58.841502 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 23:59:58.844301 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 23:59:58.844836 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. 
May 9 23:59:58.844969 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. May 9 23:59:58.852189 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:59:58.853333 systemd-tmpfiles[1636]: Skipping /boot May 9 23:59:58.880654 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:59:58.881125 systemd-tmpfiles[1636]: Skipping /boot May 9 23:59:58.924698 systemd-udevd[1637]: Using default interface naming scheme 'v255'. May 9 23:59:58.957307 zram_generator::config[1667]: No configuration found. May 9 23:59:59.077271 ldconfig[1524]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 23:59:59.191518 (udev-worker)[1725]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:59.339057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:59.521290 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1725) May 9 23:59:59.527402 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 9 23:59:59.528878 systemd[1]: Reloading finished in 764 ms. May 9 23:59:59.578428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:59:59.584244 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 23:59:59.597441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:59.672609 systemd[1]: Finished ensure-sysext.service. May 9 23:59:59.710638 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 23:59:59.729520 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 23:59:59.734685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:59.737875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:59.757158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:59:59.763554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:59.782558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:59.787685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:59.792735 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 23:59:59.810052 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:59:59.826717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:59:59.831116 systemd[1]: Reached target time-set.target - System Time Set. May 9 23:59:59.847641 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 23:59:59.861954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:59.869231 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 23:59:59.875412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 9 23:59:59.877204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:59.882699 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:59:59.884424 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:59:59.889856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:59.891407 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:59.897080 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:59.898531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:59.935712 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 23:59:59.938342 augenrules[1864]: No rules May 9 23:59:59.945922 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 23:59:59.951148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 9 23:59:59.965176 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 23:59:59.973810 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 23:59:59.974156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:59:59.974474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:59:59.978700 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 23:59:59.996924 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 10 00:00:00.003127 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 00:00:00.019285 lvm[1872]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:00:00.070370 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 00:00:00.071377 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 10 00:00:00.072753 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:00:00.090089 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 10 00:00:00.101168 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 00:00:00.101966 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 00:00:00.103179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:00:00.125183 lvm[1881]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:00:00.150428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:00:00.159433 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 00:00:00.174379 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
May 10 00:00:00.267777 systemd-networkd[1846]: lo: Link UP May 10 00:00:00.267801 systemd-networkd[1846]: lo: Gained carrier May 10 00:00:00.270662 systemd-networkd[1846]: Enumeration completed May 10 00:00:00.270861 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:00:00.271782 systemd-networkd[1846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:00:00.271791 systemd-networkd[1846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:00:00.276507 systemd-networkd[1846]: eth0: Link UP May 10 00:00:00.276854 systemd-networkd[1846]: eth0: Gained carrier May 10 00:00:00.276888 systemd-networkd[1846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:00:00.286626 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 00:00:00.286733 systemd-resolved[1848]: Positive Trust Anchors: May 10 00:00:00.286755 systemd-resolved[1848]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:00:00.286819 systemd-resolved[1848]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:00:00.294427 systemd-networkd[1846]: eth0: DHCPv4 address 172.31.18.167/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 10 00:00:00.295652 systemd-resolved[1848]: Defaulting to hostname 'linux'. May 10 00:00:00.299309 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:00:00.305796 systemd[1]: Reached target network.target - Network. May 10 00:00:00.308072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:00:00.310829 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:00:00.313243 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 00:00:00.316454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 00:00:00.319871 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 00:00:00.322503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 00:00:00.325481 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 00:00:00.328215 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:00:00.328382 systemd[1]: Reached target paths.target - Path Units. May 10 00:00:00.330582 systemd[1]: Reached target timers.target - Timer Units. May 10 00:00:00.334005 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 00:00:00.338968 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 00:00:00.348745 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
May 10 00:00:00.352909 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 00:00:00.355950 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:00:00.358513 systemd[1]: Reached target basic.target - Basic System. May 10 00:00:00.360826 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 00:00:00.360878 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 00:00:00.368585 systemd[1]: Starting containerd.service - containerd container runtime... May 10 00:00:00.378619 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 00:00:00.385698 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 00:00:00.391585 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 00:00:00.404588 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 00:00:00.407898 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 00:00:00.412650 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 00:00:00.421834 systemd[1]: Started ntpd.service - Network Time Service. May 10 00:00:00.430433 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 00:00:00.436687 jq[1900]: false May 10 00:00:00.453502 systemd[1]: Starting setup-oem.service - Setup OEM... May 10 00:00:00.461474 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 00:00:00.471595 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 00:00:00.485926 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 00:00:00.491518 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:00:00.492466 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:00:00.498627 systemd[1]: Starting update-engine.service - Update Engine... May 10 00:00:00.516460 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 00:00:00.532220 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:00:00.535421 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 00:00:00.559156 update_engine[1911]: I20250510 00:00:00.558971 1911 main.cc:92] Flatcar Update Engine starting May 10 00:00:00.607885 jq[1912]: true May 10 00:00:00.608484 ntpd[1903]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: ---------------------------------------------------- May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: ntp-4 is maintained by Network Time Foundation, May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: corporation. Support and training for ntp-4 are May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: available at https://www.nwtime.org/support May 10 00:00:00.610780 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: ---------------------------------------------------- May 10 00:00:00.609423 ntpd[1903]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 10 00:00:00.609444 ntpd[1903]: ---------------------------------------------------- May 10 00:00:00.630315 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:00:00.639746 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: proto: precision = 0.096 usec (-23) May 10 00:00:00.639746 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: basedate set to 2025-04-27 May 10 00:00:00.639746 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: gps base set to 2025-04-27 (week 2364) May 10 00:00:00.609462 ntpd[1903]: ntp-4 is maintained by Network Time Foundation, May 10 00:00:00.633092 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 00:00:00.609480 ntpd[1903]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 10 00:00:00.609499 ntpd[1903]: corporation. Support and training for ntp-4 are May 10 00:00:00.609517 ntpd[1903]: available at https://www.nwtime.org/support May 10 00:00:00.609535 ntpd[1903]: ---------------------------------------------------- May 10 00:00:00.626898 ntpd[1903]: proto: precision = 0.096 usec (-23) May 10 00:00:00.636964 ntpd[1903]: basedate set to 2025-04-27 May 10 00:00:00.637000 ntpd[1903]: gps base set to 2025-04-27 (week 2364) May 10 00:00:00.651166 tar[1916]: linux-arm64/helm May 10 00:00:00.656012 dbus-daemon[1899]: [system] SELinux support is enabled May 10 00:00:00.663877 dbus-daemon[1899]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1846 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 00:00:00.667012 update_engine[1911]: I20250510 00:00:00.666024 1911 update_check_scheduler.cc:74] Next update check in 11m57s May 10 00:00:00.667105 extend-filesystems[1901]: Found loop4 May 10 00:00:00.667105 extend-filesystems[1901]: Found loop5 May 10 00:00:00.667105 extend-filesystems[1901]: Found loop6 May 10 00:00:00.667105 extend-filesystems[1901]: Found loop7 May 10 00:00:00.667105 extend-filesystems[1901]: Found nvme0n1 May 10 00:00:00.667105 extend-filesystems[1901]: Found nvme0n1p1 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Listen and drop on 0 v6wildcard [::]:123 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Listen normally on 2 lo 127.0.0.1:123 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Listen normally on 3 eth0 172.31.18.167:123 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Listen normally on 4 lo [::1]:123 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: bind(21) AF_INET6 fe80::489:69ff:fe34:5b59%2#123 flags 0x11 failed: Cannot assign requested address May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: unable to create socket on eth0 (5) for fe80::489:69ff:fe34:5b59%2#123 May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: failed to init interface for address fe80::489:69ff:fe34:5b59%2 May 10 00:00:00.723405 
ntpd[1903]: 10 May 00:00:00 ntpd[1903]: Listening on routing socket on fd #21 for interface updates May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 10 00:00:00.723405 ntpd[1903]: 10 May 00:00:00 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 10 00:00:00.665037 ntpd[1903]: Listen and drop on 0 v6wildcard [::]:123 May 10 00:00:00.724008 jq[1928]: true May 10 00:00:00.684127 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 00:00:00.729412 extend-filesystems[1901]: Found nvme0n1p2 May 10 00:00:00.729412 extend-filesystems[1901]: Found nvme0n1p3 May 10 00:00:00.729412 extend-filesystems[1901]: Found usr May 10 00:00:00.729412 extend-filesystems[1901]: Found nvme0n1p4 May 10 00:00:00.729412 extend-filesystems[1901]: Found nvme0n1p6 May 10 00:00:00.729412 extend-filesystems[1901]: Found nvme0n1p7 May 10 00:00:00.729412 extend-filesystems[1901]: Found nvme0n1p9 May 10 00:00:00.729412 extend-filesystems[1901]: Checking size of /dev/nvme0n1p9 May 10 00:00:00.665121 ntpd[1903]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 10 00:00:00.698218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:00:00.770488 extend-filesystems[1901]: Resized partition /dev/nvme0n1p9 May 10 00:00:00.667493 ntpd[1903]: Listen normally on 2 lo 127.0.0.1:123 May 10 00:00:00.699308 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 00:00:00.667565 ntpd[1903]: Listen normally on 3 eth0 172.31.18.167:123 May 10 00:00:00.705324 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:00:00.667631 ntpd[1903]: Listen normally on 4 lo [::1]:123 May 10 00:00:00.705364 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 10 00:00:00.667708 ntpd[1903]: bind(21) AF_INET6 fe80::489:69ff:fe34:5b59%2#123 flags 0x11 failed: Cannot assign requested address May 10 00:00:00.727825 (ntainerd)[1935]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 00:00:00.667748 ntpd[1903]: unable to create socket on eth0 (5) for fe80::489:69ff:fe34:5b59%2#123 May 10 00:00:00.736910 systemd[1]: Started update-engine.service - Update Engine. May 10 00:00:00.667776 ntpd[1903]: failed to init interface for address fe80::489:69ff:fe34:5b59%2 May 10 00:00:00.746066 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:00:00.667831 ntpd[1903]: Listening on routing socket on fd #21 for interface updates May 10 00:00:00.746437 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 00:00:00.708771 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 10 00:00:00.780594 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 10 00:00:00.708820 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 10 00:00:00.726050 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.systemd1' May 10 00:00:00.788299 extend-filesystems[1949]: resize2fs 1.47.1 (20-May-2024) May 10 00:00:00.791535 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 10 00:00:00.819307 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 10 00:00:00.881533 systemd[1]: Finished setup-oem.service - Setup OEM. May 10 00:00:00.892336 coreos-metadata[1898]: May 10 00:00:00.889 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 10 00:00:00.905584 coreos-metadata[1898]: May 10 00:00:00.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 10 00:00:00.914624 coreos-metadata[1898]: May 10 00:00:00.914 INFO Fetch successful May 10 00:00:00.914624 coreos-metadata[1898]: May 10 00:00:00.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 10 00:00:00.918340 coreos-metadata[1898]: May 10 00:00:00.915 INFO Fetch successful May 10 00:00:00.918340 coreos-metadata[1898]: May 10 00:00:00.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 10 00:00:00.921350 coreos-metadata[1898]: May 10 00:00:00.921 INFO Fetch successful May 10 00:00:00.921350 coreos-metadata[1898]: May 10 00:00:00.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 10 00:00:00.928474 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 10 00:00:00.930502 coreos-metadata[1898]: May 10 00:00:00.930 INFO Fetch successful May 10 00:00:00.930502 coreos-metadata[1898]: May 10 00:00:00.930 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 10 00:00:00.932581 coreos-metadata[1898]: May 10 00:00:00.932 INFO Fetch failed with 404: resource not found May 10 00:00:00.932581 coreos-metadata[1898]: May 10 00:00:00.932 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 10 00:00:00.942554 coreos-metadata[1898]: May 10 00:00:00.933 INFO Fetch successful May 10 00:00:00.942554 coreos-metadata[1898]: May 10 00:00:00.940 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 10 00:00:00.942554 coreos-metadata[1898]: May 10 00:00:00.940 INFO Fetch successful May 10 00:00:00.942554 coreos-metadata[1898]: May 10 00:00:00.940 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 10 00:00:00.942554 coreos-metadata[1898]: May 10 00:00:00.942 INFO Fetch successful May 10 00:00:00.942554 coreos-metadata[1898]: May 10 00:00:00.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 10 00:00:00.946206 coreos-metadata[1898]: May 10 00:00:00.942 INFO Fetch successful May 10 00:00:00.946206 coreos-metadata[1898]: May 10 00:00:00.942 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 10 00:00:00.946206 coreos-metadata[1898]: May 10 00:00:00.943 INFO Fetch successful May 10 00:00:00.946406 extend-filesystems[1949]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 10 00:00:00.946406 extend-filesystems[1949]: old_desc_blocks = 1, new_desc_blocks = 1 May 10 00:00:00.946406 extend-filesystems[1949]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 10 00:00:00.962660 extend-filesystems[1901]: Resized filesystem in /dev/nvme0n1p9 May 10 00:00:00.962903 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:00:00.963287 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 10 00:00:01.036339 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1720) May 10 00:00:01.047180 systemd-logind[1910]: Watching system buttons on /dev/input/event0 (Power Button) May 10 00:00:01.047234 systemd-logind[1910]: Watching system buttons on /dev/input/event1 (Sleep Button) May 10 00:00:01.051868 systemd-logind[1910]: New seat seat0. May 10 00:00:01.055024 systemd[1]: Started systemd-logind.service - User Login Management. May 10 00:00:01.095533 bash[1979]: Updated "/home/core/.ssh/authorized_keys" May 10 00:00:01.095355 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 10 00:00:01.101201 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 00:00:01.109523 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 10 00:00:01.145147 systemd[1]: Starting sshkeys.service... May 10 00:00:01.211543 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 10 00:00:01.280551 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 10 00:00:01.294867 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 00:00:01.297616 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 10 00:00:01.302608 dbus-daemon[1899]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1950 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 00:00:01.317896 systemd[1]: Starting polkit.service - Authorization Manager... May 10 00:00:01.424243 polkitd[2026]: Started polkitd version 121 May 10 00:00:01.466105 polkitd[2026]: Loading rules from directory /etc/polkit-1/rules.d May 10 00:00:01.466219 polkitd[2026]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 00:00:01.480487 locksmithd[1951]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:00:01.481416 polkitd[2026]: Finished loading, compiling and executing 2 rules May 10 00:00:01.486655 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 00:00:01.486924 systemd[1]: Started polkit.service - Authorization Manager. May 10 00:00:01.489324 polkitd[2026]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 00:00:01.561350 systemd-resolved[1848]: System hostname changed to 'ip-172-31-18-167'. 
May 10 00:00:01.561493 systemd-hostnamed[1950]: Hostname set to (transient) May 10 00:00:01.610058 ntpd[1903]: bind(24) AF_INET6 fe80::489:69ff:fe34:5b59%2#123 flags 0x11 failed: Cannot assign requested address May 10 00:00:01.610747 ntpd[1903]: 10 May 00:00:01 ntpd[1903]: bind(24) AF_INET6 fe80::489:69ff:fe34:5b59%2#123 flags 0x11 failed: Cannot assign requested address May 10 00:00:01.610747 ntpd[1903]: 10 May 00:00:01 ntpd[1903]: unable to create socket on eth0 (6) for fe80::489:69ff:fe34:5b59%2#123 May 10 00:00:01.610747 ntpd[1903]: 10 May 00:00:01 ntpd[1903]: failed to init interface for address fe80::489:69ff:fe34:5b59%2 May 10 00:00:01.610115 ntpd[1903]: unable to create socket on eth0 (6) for fe80::489:69ff:fe34:5b59%2#123 May 10 00:00:01.610144 ntpd[1903]: failed to init interface for address fe80::489:69ff:fe34:5b59%2 May 10 00:00:01.625386 containerd[1935]: time="2025-05-10T00:00:01.623614895Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 10 00:00:01.648569 coreos-metadata[2010]: May 10 00:00:01.648 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 10 00:00:01.663866 coreos-metadata[2010]: May 10 00:00:01.660 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 10 00:00:01.664885 coreos-metadata[2010]: May 10 00:00:01.664 INFO Fetch successful May 10 00:00:01.664885 coreos-metadata[2010]: May 10 00:00:01.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 00:00:01.672983 coreos-metadata[2010]: May 10 00:00:01.671 INFO Fetch successful May 10 00:00:01.674475 unknown[2010]: wrote ssh authorized keys file for user: core May 10 00:00:01.752353 update-ssh-keys[2086]: Updated "/home/core/.ssh/authorized_keys" May 10 00:00:01.762330 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 10 00:00:01.772343 systemd[1]: Finished sshkeys.service. May 10 00:00:01.864548 containerd[1935]: time="2025-05-10T00:00:01.863771688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.870758 containerd[1935]: time="2025-05-10T00:00:01.869755824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:00:01.870758 containerd[1935]: time="2025-05-10T00:00:01.869824368Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:00:01.870758 containerd[1935]: time="2025-05-10T00:00:01.869861460Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:00:01.870758 containerd[1935]: time="2025-05-10T00:00:01.870151896Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 10 00:00:01.870758 containerd[1935]: time="2025-05-10T00:00:01.870184344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.871034 containerd[1935]: time="2025-05-10T00:00:01.870854784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:00:01.871034 containerd[1935]: time="2025-05-10T00:00:01.870891936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.872454 containerd[1935]: time="2025-05-10T00:00:01.871770864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:00:01.872454 containerd[1935]: time="2025-05-10T00:00:01.871820112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.872454 containerd[1935]: time="2025-05-10T00:00:01.871855536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:00:01.872454 containerd[1935]: time="2025-05-10T00:00:01.871880376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.872454 containerd[1935]: time="2025-05-10T00:00:01.872097024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.873459 containerd[1935]: time="2025-05-10T00:00:01.873077100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:00:01.876129 containerd[1935]: time="2025-05-10T00:00:01.873964056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:00:01.876129 containerd[1935]: time="2025-05-10T00:00:01.874012812Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:00:01.876129 containerd[1935]: time="2025-05-10T00:00:01.874208196Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:00:01.876129 containerd[1935]: time="2025-05-10T00:00:01.874340640Z" level=info msg="metadata content store policy set" policy=shared May 10 00:00:01.881390 containerd[1935]: time="2025-05-10T00:00:01.881299500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:00:01.881523 containerd[1935]: time="2025-05-10T00:00:01.881435868Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:00:01.882878 containerd[1935]: time="2025-05-10T00:00:01.882828204Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 10 00:00:01.882953 containerd[1935]: time="2025-05-10T00:00:01.882898764Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 10 00:00:01.882953 containerd[1935]: time="2025-05-10T00:00:01.882935112Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:00:01.883245 containerd[1935]: time="2025-05-10T00:00:01.883204512Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 10 00:00:01.883948 containerd[1935]: time="2025-05-10T00:00:01.883894224Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:00:01.884158 containerd[1935]: time="2025-05-10T00:00:01.884118972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 10 00:00:01.884213 containerd[1935]: time="2025-05-10T00:00:01.884163600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 10 00:00:01.884213 containerd[1935]: time="2025-05-10T00:00:01.884201232Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 10 00:00:01.884903 containerd[1935]: time="2025-05-10T00:00:01.884245296Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:00:01.884960 containerd[1935]: time="2025-05-10T00:00:01.884915940Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885008 containerd[1935]: time="2025-05-10T00:00:01.884954100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885008 containerd[1935]: time="2025-05-10T00:00:01.884986800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885112 containerd[1935]: time="2025-05-10T00:00:01.885018696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885112 containerd[1935]: time="2025-05-10T00:00:01.885049368Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885112 containerd[1935]: time="2025-05-10T00:00:01.885081804Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885231 containerd[1935]: time="2025-05-10T00:00:01.885109536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:00:01.885231 containerd[1935]: time="2025-05-10T00:00:01.885149568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885231 containerd[1935]: time="2025-05-10T00:00:01.885179988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885231 containerd[1935]: time="2025-05-10T00:00:01.885209496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885459 containerd[1935]: time="2025-05-10T00:00:01.885241080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885459 containerd[1935]: time="2025-05-10T00:00:01.885310596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885459 containerd[1935]: time="2025-05-10T00:00:01.885355716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885459 containerd[1935]: time="2025-05-10T00:00:01.885387864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 10 00:00:01.885459 containerd[1935]: time="2025-05-10T00:00:01.885419916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885459 containerd[1935]: time="2025-05-10T00:00:01.885450840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885484176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885512964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885543948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885573012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885606348Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885648720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885703 containerd[1935]: time="2025-05-10T00:00:01.885676896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:00:01.885987 containerd[1935]: time="2025-05-10T00:00:01.885703152Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.886773912Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887499036Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887531916Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887561196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887585448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887614524Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887637864Z" level=info msg="NRI interface is disabled by configuration." May 10 00:00:01.889293 containerd[1935]: time="2025-05-10T00:00:01.887665560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 00:00:01.889716 containerd[1935]: time="2025-05-10T00:00:01.888173136Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:00:01.889716 containerd[1935]: time="2025-05-10T00:00:01.888807996Z" level=info msg="Connect containerd service" May 10 00:00:01.889716 containerd[1935]: time="2025-05-10T00:00:01.888864192Z" level=info msg="using legacy CRI server" May 10 00:00:01.889716 containerd[1935]: time="2025-05-10T00:00:01.888882288Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 00:00:01.889716 containerd[1935]: time="2025-05-10T00:00:01.889028496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:00:01.891351 containerd[1935]: time="2025-05-10T00:00:01.891218484Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:00:01.893376 
containerd[1935]: time="2025-05-10T00:00:01.893295048Z" level=info msg="Start subscribing containerd event" May 10 00:00:01.893446 containerd[1935]: time="2025-05-10T00:00:01.893404068Z" level=info msg="Start recovering state" May 10 00:00:01.894119 containerd[1935]: time="2025-05-10T00:00:01.894047748Z" level=info msg="Start event monitor" May 10 00:00:01.894119 containerd[1935]: time="2025-05-10T00:00:01.894107964Z" level=info msg="Start snapshots syncer" May 10 00:00:01.894317 containerd[1935]: time="2025-05-10T00:00:01.894134364Z" level=info msg="Start cni network conf syncer for default" May 10 00:00:01.894317 containerd[1935]: time="2025-05-10T00:00:01.894155160Z" level=info msg="Start streaming server" May 10 00:00:01.897780 containerd[1935]: time="2025-05-10T00:00:01.896470680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:00:01.897780 containerd[1935]: time="2025-05-10T00:00:01.896624448Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:00:01.897780 containerd[1935]: time="2025-05-10T00:00:01.896744544Z" level=info msg="containerd successfully booted in 0.278875s" May 10 00:00:01.896868 systemd[1]: Started containerd.service - containerd container runtime. May 10 00:00:02.021452 systemd-networkd[1846]: eth0: Gained IPv6LL May 10 00:00:02.028099 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 00:00:02.034381 systemd[1]: Reached target network-online.target - Network is Online. May 10 00:00:02.051744 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 10 00:00:02.066815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:02.079375 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 00:00:02.188319 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 00:00:02.250120 amazon-ssm-agent[2104]: Initializing new seelog logger May 10 00:00:02.249952 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 00:00:02.257389 amazon-ssm-agent[2104]: New Seelog Logger Creation Complete May 10 00:00:02.257389 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.257389 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.257389 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 processing appconfig overrides May 10 00:00:02.262696 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.262696 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.262850 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 processing appconfig overrides May 10 00:00:02.263146 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.263146 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.263304 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 processing appconfig overrides May 10 00:00:02.264303 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO Proxy environment variables: May 10 00:00:02.266307 sshd_keygen[1938]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:00:02.274026 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
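The long "Start cri plugin with config" entry a few lines above is containerd dumping its effective CRI configuration: overlayfs snapshotter, runc as the default runtime via the io.containerd.runc.v2 shim, pause:3.8 as the sandbox image, and SystemdCgroup:true so cgroup management is delegated to systemd. The equivalent /etc/containerd/config.toml fragment would look roughly like this (a sketch of the containerd 1.7-era schema, not the actual file on this host):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

The "failed to load cni during init" error logged right after is expected at this stage: /etc/cni/net.d is empty until a network plugin is installed.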
May 10 00:00:02.274026 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 10 00:00:02.274026 amazon-ssm-agent[2104]: 2025/05/10 00:00:02 processing appconfig overrides May 10 00:00:02.356529 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 00:00:02.365001 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO no_proxy: May 10 00:00:02.374342 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 00:00:02.393767 systemd[1]: Started sshd@0-172.31.18.167:22-147.75.109.163:37266.service - OpenSSH per-connection server daemon (147.75.109.163:37266). May 10 00:00:02.456779 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:00:02.458359 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 00:00:02.474010 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO https_proxy: May 10 00:00:02.473854 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 00:00:02.540023 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 00:00:02.542781 tar[1916]: linux-arm64/LICENSE May 10 00:00:02.543483 tar[1916]: linux-arm64/README.md May 10 00:00:02.567717 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO http_proxy: May 10 00:00:02.566868 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 00:00:02.580870 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 10 00:00:02.585843 systemd[1]: Reached target getty.target - Login Prompts. May 10 00:00:02.592551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 00:00:02.665624 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO Checking if agent identity type OnPrem can be assumed May 10 00:00:02.676073 sshd[2130]: Accepted publickey for core from 147.75.109.163 port 37266 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:02.680171 sshd[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:02.697753 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 00:00:02.713738 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 00:00:02.726933 systemd-logind[1910]: New session 1 of user core. May 10 00:00:02.757665 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 00:00:02.766938 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO Checking if agent identity type EC2 can be assumed May 10 00:00:02.773724 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 00:00:02.793932 (systemd)[2145]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:00:02.865417 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO Agent will take identity from EC2 May 10 00:00:02.966389 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] using named pipe channel for IPC May 10 00:00:03.062445 systemd[2145]: Queued start job for default target default.target. May 10 00:00:03.065694 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] using named pipe channel for IPC May 10 00:00:03.069904 systemd[2145]: Created slice app.slice - User Application Slice. May 10 00:00:03.069965 systemd[2145]: Reached target paths.target - Paths. May 10 00:00:03.069998 systemd[2145]: Reached target timers.target - Timers. May 10 00:00:03.074515 systemd[2145]: Starting dbus.socket - D-Bus User Message Bus Socket... 
May 10 00:00:03.096719 systemd[2145]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 00:00:03.097517 systemd[2145]: Reached target sockets.target - Sockets. May 10 00:00:03.097559 systemd[2145]: Reached target basic.target - Basic System. May 10 00:00:03.097653 systemd[2145]: Reached target default.target - Main User Target. May 10 00:00:03.097726 systemd[2145]: Startup finished in 291ms. May 10 00:00:03.097842 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 00:00:03.111568 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 00:00:03.165758 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] using named pipe channel for IPC May 10 00:00:03.269505 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 10 00:00:03.290763 systemd[1]: Started sshd@1-172.31.18.167:22-147.75.109.163:37280.service - OpenSSH per-connection server daemon (147.75.109.163:37280). May 10 00:00:03.370005 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 10 00:00:03.471110 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] Starting Core Agent May 10 00:00:03.505270 sshd[2158]: Accepted publickey for core from 147.75.109.163 port 37280 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:03.508831 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:03.519644 systemd-logind[1910]: New session 2 of user core. May 10 00:00:03.526868 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 00:00:03.571339 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 10 00:00:03.671095 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [Registrar] Starting registrar module May 10 00:00:03.672189 sshd[2158]: pam_unix(sshd:session): session closed for user core May 10 00:00:03.680165 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:00:03.681223 systemd[1]: sshd@1-172.31.18.167:22-147.75.109.163:37280.service: Deactivated successfully. May 10 00:00:03.693395 systemd-logind[1910]: Session 2 logged out. Waiting for processes to exit. May 10 00:00:03.715442 systemd[1]: Started sshd@2-172.31.18.167:22-147.75.109.163:37290.service - OpenSSH per-connection server daemon (147.75.109.163:37290). May 10 00:00:03.722764 systemd-logind[1910]: Removed session 2. May 10 00:00:03.771351 amazon-ssm-agent[2104]: 2025-05-10 00:00:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 10 00:00:03.923943 sshd[2165]: Accepted publickey for core from 147.75.109.163 port 37290 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:03.923333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:03.928918 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 00:00:03.936268 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:03.936385 systemd[1]: Startup finished in 1.142s (kernel) + 8.464s (initrd) + 8.722s (userspace) = 18.330s. May 10 00:00:03.945966 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:00:03.963951 systemd-logind[1910]: New session 3 of user core. May 10 00:00:03.967561 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 10 00:00:04.106416 sshd[2165]: pam_unix(sshd:session): session closed for user core May 10 00:00:04.113942 systemd[1]: sshd@2-172.31.18.167:22-147.75.109.163:37290.service: Deactivated successfully. May 10 00:00:04.118950 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:00:04.127561 systemd-logind[1910]: Session 3 logged out. Waiting for processes to exit. May 10 00:00:04.131318 systemd-logind[1910]: Removed session 3. May 10 00:00:04.308125 amazon-ssm-agent[2104]: 2025-05-10 00:00:04 INFO [EC2Identity] EC2 registration was successful. May 10 00:00:04.340652 amazon-ssm-agent[2104]: 2025-05-10 00:00:04 INFO [CredentialRefresher] credentialRefresher has started May 10 00:00:04.340652 amazon-ssm-agent[2104]: 2025-05-10 00:00:04 INFO [CredentialRefresher] Starting credentials refresher loop May 10 00:00:04.340652 amazon-ssm-agent[2104]: 2025-05-10 00:00:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 10 00:00:04.408151 amazon-ssm-agent[2104]: 2025-05-10 00:00:04 INFO [CredentialRefresher] Next credential rotation will be in 31.116658598066667 minutes May 10 00:00:04.610077 ntpd[1903]: Listen normally on 7 eth0 [fe80::489:69ff:fe34:5b59%2]:123 May 10 00:00:04.610734 ntpd[1903]: 10 May 00:00:04 ntpd[1903]: Listen normally on 7 eth0 [fe80::489:69ff:fe34:5b59%2]:123 May 10 00:00:04.966766 kubelet[2172]: E0510 00:00:04.966593 2172 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:04.971422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:04.972151 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:00:04.972904 systemd[1]: kubelet.service: Consumed 1.307s CPU time. May 10 00:00:05.368385 amazon-ssm-agent[2104]: 2025-05-10 00:00:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 10 00:00:05.469687 amazon-ssm-agent[2104]: 2025-05-10 00:00:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2189) started May 10 00:00:05.570735 amazon-ssm-agent[2104]: 2025-05-10 00:00:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 10 00:00:07.802847 systemd-resolved[1848]: Clock change detected. Flushing caches. May 10 00:00:14.342223 systemd[1]: Started sshd@3-172.31.18.167:22-147.75.109.163:56286.service - OpenSSH per-connection server daemon (147.75.109.163:56286). May 10 00:00:14.507001 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 56286 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:14.509552 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:14.517597 systemd-logind[1910]: New session 4 of user core. May 10 00:00:14.524015 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 00:00:14.651122 sshd[2200]: pam_unix(sshd:session): session closed for user core May 10 00:00:14.658334 systemd-logind[1910]: Session 4 logged out. Waiting for processes to exit. May 10 00:00:14.658653 systemd[1]: sshd@3-172.31.18.167:22-147.75.109.163:56286.service: Deactivated successfully. 
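The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style setup that file is written during init/join, and until then the unit keeps crash-looping exactly as the repeated restarts below show. For orientation, a minimal KubeletConfiguration has this shape (an illustrative sketch, not the file this node eventually receives):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup = true in containerd above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock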
May 10 00:00:14.661738 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:00:14.663664 systemd-logind[1910]: Removed session 4. May 10 00:00:14.690316 systemd[1]: Started sshd@4-172.31.18.167:22-147.75.109.163:56294.service - OpenSSH per-connection server daemon (147.75.109.163:56294). May 10 00:00:14.855369 sshd[2207]: Accepted publickey for core from 147.75.109.163 port 56294 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:14.857945 sshd[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:14.867863 systemd-logind[1910]: New session 5 of user core. May 10 00:00:14.874035 systemd[1]: Started session-5.scope - Session 5 of User core. May 10 00:00:14.991163 sshd[2207]: pam_unix(sshd:session): session closed for user core May 10 00:00:14.996136 systemd-logind[1910]: Session 5 logged out. Waiting for processes to exit. May 10 00:00:14.997263 systemd[1]: sshd@4-172.31.18.167:22-147.75.109.163:56294.service: Deactivated successfully. May 10 00:00:15.000985 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:00:15.004828 systemd-logind[1910]: Removed session 5. May 10 00:00:15.030290 systemd[1]: Started sshd@5-172.31.18.167:22-147.75.109.163:56304.service - OpenSSH per-connection server daemon (147.75.109.163:56304). May 10 00:00:15.165653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:00:15.178149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:15.196235 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 56304 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:15.199149 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.208099 systemd-logind[1910]: New session 6 of user core. May 10 00:00:15.218079 systemd[1]: Started session-6.scope - Session 6 of User core. May 10 00:00:15.352085 sshd[2214]: pam_unix(sshd:session): session closed for user core May 10 00:00:15.360658 systemd[1]: sshd@5-172.31.18.167:22-147.75.109.163:56304.service: Deactivated successfully. May 10 00:00:15.361025 systemd-logind[1910]: Session 6 logged out. Waiting for processes to exit. May 10 00:00:15.367382 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:00:15.370764 systemd-logind[1910]: Removed session 6. May 10 00:00:15.394540 systemd[1]: Started sshd@6-172.31.18.167:22-147.75.109.163:56320.service - OpenSSH per-connection server daemon (147.75.109.163:56320). May 10 00:00:15.489748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:15.509252 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:00:15.576727 sshd[2224]: Accepted publickey for core from 147.75.109.163 port 56320 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:15.580019 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.590294 systemd-logind[1910]: New session 7 of user core. 
May 10 00:00:15.594865 kubelet[2231]: E0510 00:00:15.594233 2231 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:15.598060 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 00:00:15.604330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:15.604691 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:00:15.719039 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 00:00:15.719651 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:00:15.738366 sudo[2240]: pam_unix(sudo:session): session closed for user root May 10 00:00:15.761876 sshd[2224]: pam_unix(sshd:session): session closed for user core May 10 00:00:15.768109 systemd[1]: sshd@6-172.31.18.167:22-147.75.109.163:56320.service: Deactivated successfully. May 10 00:00:15.772066 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:00:15.773569 systemd-logind[1910]: Session 7 logged out. Waiting for processes to exit. May 10 00:00:15.775455 systemd-logind[1910]: Removed session 7. May 10 00:00:15.799303 systemd[1]: Started sshd@7-172.31.18.167:22-147.75.109.163:56336.service - OpenSSH per-connection server daemon (147.75.109.163:56336). May 10 00:00:15.973150 sshd[2245]: Accepted publickey for core from 147.75.109.163 port 56336 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:15.976003 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.986021 systemd-logind[1910]: New session 8 of user core. May 10 00:00:15.993020 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 00:00:16.095707 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 00:00:16.096923 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:00:16.103314 sudo[2249]: pam_unix(sudo:session): session closed for user root May 10 00:00:16.113318 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 10 00:00:16.113978 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:00:16.136277 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 10 00:00:16.141705 auditctl[2252]: No rules May 10 00:00:16.144021 systemd[1]: audit-rules.service: Deactivated successfully. May 10 00:00:16.145888 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 10 00:00:16.153993 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:00:16.202949 augenrules[2270]: No rules May 10 00:00:16.206839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:00:16.208668 sudo[2248]: pam_unix(sudo:session): session closed for user root May 10 00:00:16.232179 sshd[2245]: pam_unix(sshd:session): session closed for user core May 10 00:00:16.238094 systemd[1]: sshd@7-172.31.18.167:22-147.75.109.163:56336.service: Deactivated successfully. May 10 00:00:16.241029 systemd[1]: session-8.scope: Deactivated successfully. 
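The two sudo invocations above delete the shipped audit rule files and restart audit-rules, after which both auditctl and augenrules report "No rules": augenrules concatenates every *.rules file under /etc/audit/rules.d into /etc/audit/audit.rules and loads the result, so an empty directory yields an empty ruleset. A rules.d file uses plain auditctl syntax, for example (illustrative, not one of the files deleted here):

    # /etc/audit/rules.d/50-sshd.rules (illustrative)
    # watch sshd_config for writes and attribute changes, tagged for search
    -w /etc/ssh/sshd_config -p wa -k sshd_config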
May 10 00:00:16.244259 systemd-logind[1910]: Session 8 logged out. Waiting for processes to exit. May 10 00:00:16.245956 systemd-logind[1910]: Removed session 8. May 10 00:00:16.265881 systemd[1]: Started sshd@8-172.31.18.167:22-147.75.109.163:56344.service - OpenSSH per-connection server daemon (147.75.109.163:56344). May 10 00:00:16.447167 sshd[2278]: Accepted publickey for core from 147.75.109.163 port 56344 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:16.449679 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:16.458852 systemd-logind[1910]: New session 9 of user core. May 10 00:00:16.465074 systemd[1]: Started session-9.scope - Session 9 of User core. May 10 00:00:16.569386 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:00:16.570047 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:00:17.004264 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 00:00:17.016273 (dockerd)[2297]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 00:00:17.375352 dockerd[2297]: time="2025-05-10T00:00:17.375168271Z" level=info msg="Starting up" May 10 00:00:17.485846 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1369434000-merged.mount: Deactivated successfully. May 10 00:00:17.518148 dockerd[2297]: time="2025-05-10T00:00:17.517694192Z" level=info msg="Loading containers: start." May 10 00:00:17.668803 kernel: Initializing XFRM netlink socket May 10 00:00:17.706448 (udev-worker)[2320]: Network interface NamePolicy= disabled on kernel command line. May 10 00:00:17.797398 systemd-networkd[1846]: docker0: Link UP May 10 00:00:17.824199 dockerd[2297]: time="2025-05-10T00:00:17.824056593Z" level=info msg="Loading containers: done." May 10 00:00:17.848259 dockerd[2297]: time="2025-05-10T00:00:17.848083233Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:00:17.848259 dockerd[2297]: time="2025-05-10T00:00:17.848231877Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 10 00:00:17.848553 dockerd[2297]: time="2025-05-10T00:00:17.848439765Z" level=info msg="Daemon has completed initialization" May 10 00:00:17.910624 dockerd[2297]: time="2025-05-10T00:00:17.909867838Z" level=info msg="API listen on /run/docker.sock" May 10 00:00:17.910966 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 00:00:19.112434 containerd[1935]: time="2025-05-10T00:00:19.112362212Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 00:00:19.806664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146040328.mount: Deactivated successfully. 
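Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that UNIX socket. A quick Python check against the documented /_ping health endpoint (socket path as logged above; no third-party client needed):

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    # /_ping is the Engine API health endpoint; a healthy daemon answers "OK"
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode())
    s.close()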
May 10 00:00:21.251273 containerd[1935]: time="2025-05-10T00:00:21.251198218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:21.253300 containerd[1935]: time="2025-05-10T00:00:21.253245022Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" May 10 00:00:21.254172 containerd[1935]: time="2025-05-10T00:00:21.253640566Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:21.260206 containerd[1935]: time="2025-05-10T00:00:21.260125954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:21.263892 containerd[1935]: time="2025-05-10T00:00:21.263199766Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.150770114s" May 10 00:00:21.263892 containerd[1935]: time="2025-05-10T00:00:21.263264170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 10 00:00:21.300100 containerd[1935]: time="2025-05-10T00:00:21.300053927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 00:00:22.872133 containerd[1935]: time="2025-05-10T00:00:22.871845482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:22.873986 containerd[1935]: time="2025-05-10T00:00:22.873933362Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" May 10 00:00:22.875356 containerd[1935]: time="2025-05-10T00:00:22.875272046Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:22.880995 containerd[1935]: time="2025-05-10T00:00:22.880910906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:22.883462 containerd[1935]: time="2025-05-10T00:00:22.883270862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.582975231s" May 10 00:00:22.883462 containerd[1935]: time="2025-05-10T00:00:22.883326926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 10 
00:00:22.924216 containerd[1935]: time="2025-05-10T00:00:22.924129579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 00:00:24.046191 containerd[1935]: time="2025-05-10T00:00:24.046124316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:24.048210 containerd[1935]: time="2025-05-10T00:00:24.048143820Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" May 10 00:00:24.049633 containerd[1935]: time="2025-05-10T00:00:24.049548948Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:24.055102 containerd[1935]: time="2025-05-10T00:00:24.055023264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:24.060331 containerd[1935]: time="2025-05-10T00:00:24.060242484Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.136017973s" May 10 00:00:24.061814 containerd[1935]: time="2025-05-10T00:00:24.060513936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 10 00:00:24.106559 containerd[1935]: time="2025-05-10T00:00:24.106511304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 00:00:25.392734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174144922.mount: Deactivated successfully. May 10 00:00:25.703918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:00:25.712165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 10 00:00:26.039858 containerd[1935]: time="2025-05-10T00:00:26.039721238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:26.042423 containerd[1935]: time="2025-05-10T00:00:26.042361550Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" May 10 00:00:26.044641 containerd[1935]: time="2025-05-10T00:00:26.044567042Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:26.050187 containerd[1935]: time="2025-05-10T00:00:26.050100542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:26.051815 containerd[1935]: time="2025-05-10T00:00:26.051578378Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.944733342s" May 10 00:00:26.051815 containerd[1935]: time="2025-05-10T00:00:26.051638966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 10 00:00:26.061439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:26.065976 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:00:26.101831 containerd[1935]: time="2025-05-10T00:00:26.101397014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:00:26.158157 kubelet[2532]: E0510 00:00:26.158041 2532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:26.164378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:26.164934 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:00:26.643599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027864686.mount: Deactivated successfully. 
May 10 00:00:27.726013 containerd[1935]: time="2025-05-10T00:00:27.725942286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:27.728146 containerd[1935]: time="2025-05-10T00:00:27.728092734Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 10 00:00:27.729052 containerd[1935]: time="2025-05-10T00:00:27.728966682Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:27.734809 containerd[1935]: time="2025-05-10T00:00:27.734706019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:27.737456 containerd[1935]: time="2025-05-10T00:00:27.737244535Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.635771477s" May 10 00:00:27.737456 containerd[1935]: time="2025-05-10T00:00:27.737307955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 10 00:00:27.778818 containerd[1935]: time="2025-05-10T00:00:27.778620619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 00:00:28.259642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458064611.mount: Deactivated successfully. 
May 10 00:00:28.273759 containerd[1935]: time="2025-05-10T00:00:28.273429641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:28.275104 containerd[1935]: time="2025-05-10T00:00:28.275040005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" May 10 00:00:28.276154 containerd[1935]: time="2025-05-10T00:00:28.276072065Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:28.280331 containerd[1935]: time="2025-05-10T00:00:28.280223477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:28.282646 containerd[1935]: time="2025-05-10T00:00:28.282006749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 503.325842ms" May 10 00:00:28.282646 containerd[1935]: time="2025-05-10T00:00:28.282061733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 10 00:00:28.319567 containerd[1935]: time="2025-05-10T00:00:28.319514117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:00:28.863491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015439223.mount: Deactivated successfully. May 10 00:00:30.847760 containerd[1935]: time="2025-05-10T00:00:30.847684150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:30.875064 containerd[1935]: time="2025-05-10T00:00:30.874979746Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" May 10 00:00:30.920393 containerd[1935]: time="2025-05-10T00:00:30.920284534Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:30.974170 containerd[1935]: time="2025-05-10T00:00:30.972880643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:30.974832 containerd[1935]: time="2025-05-10T00:00:30.974487419Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.654913074s" May 10 00:00:30.974832 containerd[1935]: time="2025-05-10T00:00:30.974544959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 10 00:00:31.762599 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
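Each pull record above pairs "bytes read" with wall-clock time, which gives a rough effective pull rate per image; etcd's 66,191,472 bytes in about 2.65 s works out to roughly 25 MB/s. The same arithmetic in Python, using figures copied from the log:

    # (bytes read, seconds) as logged for each completed pull above
    pulls = {
        "kube-apiserver:v1.30.12": (29_794_150, 2.150770114),
        "pause:3.9": (268_821, 0.503325842),
        "etcd:3.5.12-0": (66_191_472, 2.654913074),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")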
May 10 00:00:36.203762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 00:00:36.213134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:36.525184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:36.534608 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:00:36.622595 kubelet[2717]: E0510 00:00:36.622360 2717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:36.627303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:36.628517 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:00:37.136636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:37.147292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:37.191807 systemd[1]: Reloading requested from client PID 2732 ('systemctl') (unit session-9.scope)... May 10 00:00:37.191835 systemd[1]: Reloading... May 10 00:00:37.420822 zram_generator::config[2775]: No configuration found. May 10 00:00:37.664307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:00:37.837248 systemd[1]: Reloading finished in 644 ms. May 10 00:00:37.924652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:37.933238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:37.938380 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:00:37.939873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:37.949427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:38.228049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:38.230519 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:00:38.305843 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:00:38.305843 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:00:38.305843 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
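kubelet.service restarting on a counter every ~10 s, with KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS referenced but unset, matches the usual kubeadm drop-in pattern: the unit carries Restart=always plus optional environment files that kubeadm fills in later. A representative drop-in (a sketch of the common 10-kubeadm.conf layout, not verified against this image):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # written by kubeadm
    EnvironmentFile=-/etc/default/kubelet                 # KUBELET_EXTRA_ARGS
    Restart=always
    RestartSec=10

Once config.yaml exists (after the reload triggered from session-9 below), the kubelet stays up and begins its client-certificate bootstrap against the API server.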
May 10 00:00:38.306394 kubelet[2837]: I0510 00:00:38.305976 2837 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:00:39.607941 kubelet[2837]: I0510 00:00:39.607869 2837 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:00:39.607941 kubelet[2837]: I0510 00:00:39.607909 2837 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:00:39.608660 kubelet[2837]: I0510 00:00:39.608271 2837 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:00:39.643445 kubelet[2837]: I0510 00:00:39.643016 2837 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:00:39.643445 kubelet[2837]: E0510 00:00:39.643389 2837 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.167:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.657460 kubelet[2837]: I0510 00:00:39.657421 2837 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:00:39.659980 kubelet[2837]: I0510 00:00:39.659922 2837 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:00:39.660838 kubelet[2837]: I0510 00:00:39.660119 2837 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:00:39.660838 kubelet[2837]: I0510 00:00:39.660443 2837 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:00:39.660838 kubelet[2837]: I0510 00:00:39.660463 2837 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:00:39.660838 kubelet[2837]: I0510 00:00:39.660696 2837 state_mem.go:36] "Initialized new in-memory state 
store" May 10 00:00:39.662259 kubelet[2837]: I0510 00:00:39.662210 2837 kubelet.go:400] "Attempting to sync node with API server" May 10 00:00:39.662259 kubelet[2837]: I0510 00:00:39.662254 2837 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:00:39.662395 kubelet[2837]: I0510 00:00:39.662364 2837 kubelet.go:312] "Adding apiserver pod source" May 10 00:00:39.662492 kubelet[2837]: I0510 00:00:39.662399 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:00:39.664380 kubelet[2837]: W0510 00:00:39.664094 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-167&limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.664380 kubelet[2837]: E0510 00:00:39.664175 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-167&limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.664380 kubelet[2837]: W0510 00:00:39.664297 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.664380 kubelet[2837]: E0510 00:00:39.664352 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.665826 kubelet[2837]: I0510 00:00:39.665285 2837 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:00:39.665826 kubelet[2837]: I0510 00:00:39.665643 2837 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:00:39.665826 kubelet[2837]: W0510 00:00:39.665721 2837 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 10 00:00:39.667537 kubelet[2837]: I0510 00:00:39.667501 2837 server.go:1264] "Started kubelet" May 10 00:00:39.670713 kubelet[2837]: I0510 00:00:39.670671 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:00:39.680961 kubelet[2837]: I0510 00:00:39.680925 2837 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:00:39.681872 kubelet[2837]: I0510 00:00:39.681389 2837 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:00:39.682933 kubelet[2837]: I0510 00:00:39.682899 2837 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:00:39.683158 kubelet[2837]: I0510 00:00:39.683074 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:00:39.683315 kubelet[2837]: I0510 00:00:39.683294 2837 reconciler.go:26] "Reconciler: start to sync state" May 10 00:00:39.683881 kubelet[2837]: I0510 00:00:39.683514 2837 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:00:39.684056 kubelet[2837]: I0510 00:00:39.683026 2837 server.go:455] "Adding debug handlers to kubelet server" May 10 00:00:39.688395 kubelet[2837]: E0510 00:00:39.688083 2837 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.167:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.167:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-167.183e0165358375d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-167,UID:ip-172-31-18-167,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-167,},FirstTimestamp:2025-05-10 00:00:39.667463634 +0000 UTC m=+1.430234108,LastTimestamp:2025-05-10 00:00:39.667463634 +0000 UTC m=+1.430234108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-167,}" May 10 00:00:39.688612 kubelet[2837]: W0510 00:00:39.688514 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.688612 kubelet[2837]: E0510 00:00:39.688590 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.688726 kubelet[2837]: E0510 00:00:39.688690 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-167?timeout=10s\": dial tcp 172.31.18.167:6443: connect: connection refused" interval="200ms" May 10 00:00:39.689696 kubelet[2837]: I0510 00:00:39.689242 2837 factory.go:221] Registration of the systemd container factory successfully May 10 00:00:39.689696 kubelet[2837]: I0510 00:00:39.689393 2837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:00:39.690204 kubelet[2837]: E0510 
00:00:39.690153 2837 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:00:39.693876 kubelet[2837]: I0510 00:00:39.693673 2837 factory.go:221] Registration of the containerd container factory successfully May 10 00:00:39.719761 kubelet[2837]: I0510 00:00:39.719700 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:00:39.731352 kubelet[2837]: I0510 00:00:39.731310 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:00:39.731563 kubelet[2837]: I0510 00:00:39.731543 2837 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:00:39.732220 kubelet[2837]: I0510 00:00:39.731664 2837 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:00:39.732220 kubelet[2837]: E0510 00:00:39.731733 2837 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:00:39.740564 kubelet[2837]: W0510 00:00:39.740472 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.740564 kubelet[2837]: E0510 00:00:39.740568 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:39.741926 kubelet[2837]: I0510 00:00:39.741890 2837 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:00:39.741926 kubelet[2837]: I0510 00:00:39.741922 2837 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:00:39.742098 kubelet[2837]: I0510 00:00:39.741955 2837 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:39.744605 kubelet[2837]: I0510 00:00:39.744560 2837 policy_none.go:49] "None policy: Start" May 10 00:00:39.745557 kubelet[2837]: I0510 00:00:39.745524 2837 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:00:39.745684 kubelet[2837]: I0510 00:00:39.745570 2837 state_mem.go:35] "Initializing new in-memory state store" May 10 00:00:39.757030 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 10 00:00:39.773009 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 10 00:00:39.784682 kubelet[2837]: I0510 00:00:39.784565 2837 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-167" May 10 00:00:39.785700 kubelet[2837]: E0510 00:00:39.785591 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.167:6443/api/v1/nodes\": dial tcp 172.31.18.167:6443: connect: connection refused" node="ip-172-31-18-167" May 10 00:00:39.790500 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
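The kubepods-*.slice units created above follow a fixed naming scheme: one parent slice per QoS class, with each pod's slice named after its UID, dashes replaced by underscores (visible later in this log for UID a6668da4-f486-4579-8705-0fd3a9ac38ef). A small sketch of that mapping, reconstructed from the names in this log rather than from kubelet source:

    def pod_slice(qos: str, pod_uid: str) -> str:
        # QoS class -> parent slice; pod UID dashes become underscores,
        # matching e.g. kubepods-burstable-pod2ac449d9888007ab5301334ba1a67612.slice
        uid = pod_uid.replace("-", "_")
        parent = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
        return f"{parent}-pod{uid}.slice"

    print(pod_slice("besteffort", "a6668da4-f486-4579-8705-0fd3a9ac38ef"))
    # -> kubepods-besteffort-poda6668da4_f486_4579_8705_0fd3a9ac38ef.slice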
May 10 00:00:39.792978 kubelet[2837]: I0510 00:00:39.792934 2837 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:00:39.793316 kubelet[2837]: I0510 00:00:39.793247 2837 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:00:39.793445 kubelet[2837]: I0510 00:00:39.793413 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:00:39.796934 kubelet[2837]: E0510 00:00:39.796878 2837 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-167\" not found" May 10 00:00:39.832135 kubelet[2837]: I0510 00:00:39.832066 2837 topology_manager.go:215] "Topology Admit Handler" podUID="2ac449d9888007ab5301334ba1a67612" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-167" May 10 00:00:39.834285 kubelet[2837]: I0510 00:00:39.834076 2837 topology_manager.go:215] "Topology Admit Handler" podUID="7f5e46332a74624bc7809914abe772fc" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-167" May 10 00:00:39.836551 kubelet[2837]: I0510 00:00:39.836140 2837 topology_manager.go:215] "Topology Admit Handler" podUID="abd855e552c0d04a5871b10a20767038" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-167" May 10 00:00:39.851084 systemd[1]: Created slice kubepods-burstable-pod2ac449d9888007ab5301334ba1a67612.slice - libcontainer container kubepods-burstable-pod2ac449d9888007ab5301334ba1a67612.slice. May 10 00:00:39.871250 systemd[1]: Created slice kubepods-burstable-pod7f5e46332a74624bc7809914abe772fc.slice - libcontainer container kubepods-burstable-pod7f5e46332a74624bc7809914abe772fc.slice. May 10 00:00:39.890274 kubelet[2837]: E0510 00:00:39.890170 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-167?timeout=10s\": dial tcp 172.31.18.167:6443: connect: connection refused" interval="400ms" May 10 00:00:39.893494 systemd[1]: Created slice kubepods-burstable-podabd855e552c0d04a5871b10a20767038.slice - libcontainer container kubepods-burstable-podabd855e552c0d04a5871b10a20767038.slice. 
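The three Topology Admit Handler entries above are the control-plane static pods: the kubelet materializes them from manifest files under /etc/kubernetes/manifests (the "Adding static pod path" line earlier). For orientation, a minimal, hypothetical manifest of that kind; the real files are written by kubeadm and also mount the cert and kubeconfig host paths listed just below:

    import json

    # Hypothetical minimal static-pod manifest; the kubelet accepts JSON
    # or YAML files dropped into its static pod path.
    manifest = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "kube-scheduler", "namespace": "kube-system"},
        "spec": {"containers": [{
            "name": "kube-scheduler",
            # Image tag assumed to match the kubeletVersion logged above.
            "image": "registry.k8s.io/kube-scheduler:v1.30.1",
        }]},
    }
    with open("/etc/kubernetes/manifests/kube-scheduler.json", "w") as f:
        json.dump(manifest, f, indent=2)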
May 10 00:00:39.984480 kubelet[2837]: I0510 00:00:39.984418 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ac449d9888007ab5301334ba1a67612-ca-certs\") pod \"kube-apiserver-ip-172-31-18-167\" (UID: \"2ac449d9888007ab5301334ba1a67612\") " pod="kube-system/kube-apiserver-ip-172-31-18-167" May 10 00:00:39.984606 kubelet[2837]: I0510 00:00:39.984486 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:39.984606 kubelet[2837]: I0510 00:00:39.984524 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:39.984606 kubelet[2837]: I0510 00:00:39.984563 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:39.984812 kubelet[2837]: I0510 00:00:39.984604 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:39.984812 kubelet[2837]: I0510 00:00:39.984642 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ac449d9888007ab5301334ba1a67612-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-167\" (UID: \"2ac449d9888007ab5301334ba1a67612\") " pod="kube-system/kube-apiserver-ip-172-31-18-167" May 10 00:00:39.984812 kubelet[2837]: I0510 00:00:39.984678 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ac449d9888007ab5301334ba1a67612-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-167\" (UID: \"2ac449d9888007ab5301334ba1a67612\") " pod="kube-system/kube-apiserver-ip-172-31-18-167" May 10 00:00:39.984812 kubelet[2837]: I0510 00:00:39.984714 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:39.984812 kubelet[2837]: I0510 00:00:39.984750 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/abd855e552c0d04a5871b10a20767038-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-167\" (UID: \"abd855e552c0d04a5871b10a20767038\") " pod="kube-system/kube-scheduler-ip-172-31-18-167" May 10 00:00:39.988404 kubelet[2837]: I0510 00:00:39.988343 2837 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-167" May 10 00:00:39.988871 kubelet[2837]: E0510 00:00:39.988821 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.167:6443/api/v1/nodes\": dial tcp 172.31.18.167:6443: connect: connection refused" node="ip-172-31-18-167" May 10 00:00:40.166033 containerd[1935]: time="2025-05-10T00:00:40.165754816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-167,Uid:2ac449d9888007ab5301334ba1a67612,Namespace:kube-system,Attempt:0,}" May 10 00:00:40.188464 containerd[1935]: time="2025-05-10T00:00:40.188078356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-167,Uid:7f5e46332a74624bc7809914abe772fc,Namespace:kube-system,Attempt:0,}" May 10 00:00:40.201081 containerd[1935]: time="2025-05-10T00:00:40.201031000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-167,Uid:abd855e552c0d04a5871b10a20767038,Namespace:kube-system,Attempt:0,}" May 10 00:00:40.291443 kubelet[2837]: E0510 00:00:40.291367 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-167?timeout=10s\": dial tcp 172.31.18.167:6443: connect: connection refused" interval="800ms" May 10 00:00:40.390818 kubelet[2837]: I0510 00:00:40.390748 2837 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-167" May 10 00:00:40.391322 kubelet[2837]: E0510 00:00:40.391243 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.167:6443/api/v1/nodes\": dial tcp 172.31.18.167:6443: connect: connection refused" node="ip-172-31-18-167" May 10 00:00:40.650848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191181995.mount: Deactivated successfully. 
May 10 00:00:40.658012 containerd[1935]: time="2025-05-10T00:00:40.657870007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:40.660469 containerd[1935]: time="2025-05-10T00:00:40.660364687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:40.661436 containerd[1935]: time="2025-05-10T00:00:40.661390639Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:00:40.663385 containerd[1935]: time="2025-05-10T00:00:40.663336631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 10 00:00:40.664978 containerd[1935]: time="2025-05-10T00:00:40.664756399Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:40.666728 containerd[1935]: time="2025-05-10T00:00:40.666624811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:00:40.670233 containerd[1935]: time="2025-05-10T00:00:40.670187419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:40.675431 containerd[1935]: time="2025-05-10T00:00:40.675101671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.200563ms" May 10 00:00:40.676591 containerd[1935]: time="2025-05-10T00:00:40.676487239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:40.680978 containerd[1935]: time="2025-05-10T00:00:40.680148991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.963355ms" May 10 00:00:40.681182 kubelet[2837]: W0510 00:00:40.681085 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:40.681182 kubelet[2837]: E0510 00:00:40.681170 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:40.683399 containerd[1935]: time="2025-05-10T00:00:40.683044627Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.593771ms" May 10 00:00:40.927298 containerd[1935]: time="2025-05-10T00:00:40.927047084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:40.927298 containerd[1935]: time="2025-05-10T00:00:40.927218276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:40.927488 containerd[1935]: time="2025-05-10T00:00:40.927289844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:40.930414 containerd[1935]: time="2025-05-10T00:00:40.927050900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:40.930599 containerd[1935]: time="2025-05-10T00:00:40.930526256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:40.930757 containerd[1935]: time="2025-05-10T00:00:40.930560912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:40.932062 containerd[1935]: time="2025-05-10T00:00:40.930724304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:40.933732 containerd[1935]: time="2025-05-10T00:00:40.933104660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:40.933732 containerd[1935]: time="2025-05-10T00:00:40.933209768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:40.933732 containerd[1935]: time="2025-05-10T00:00:40.933247472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:40.934350 containerd[1935]: time="2025-05-10T00:00:40.933594296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:40.935353 containerd[1935]: time="2025-05-10T00:00:40.933390272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:40.954202 kubelet[2837]: W0510 00:00:40.953978 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-167&limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:40.954202 kubelet[2837]: E0510 00:00:40.954074 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-167&limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:40.977656 systemd[1]: Started cri-containerd-7f9f64e5f410019b250e6ea8bb6ea07ba07cf42ffcc13e580493324d94fcc98e.scope - libcontainer container 7f9f64e5f410019b250e6ea8bb6ea07ba07cf42ffcc13e580493324d94fcc98e. May 10 00:00:41.001107 systemd[1]: Started cri-containerd-1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19.scope - libcontainer container 1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19. May 10 00:00:41.005380 systemd[1]: Started cri-containerd-dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585.scope - libcontainer container dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585. May 10 00:00:41.087081 containerd[1935]: time="2025-05-10T00:00:41.086999549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-167,Uid:2ac449d9888007ab5301334ba1a67612,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f9f64e5f410019b250e6ea8bb6ea07ba07cf42ffcc13e580493324d94fcc98e\"" May 10 00:00:41.093350 kubelet[2837]: E0510 00:00:41.092689 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-167?timeout=10s\": dial tcp 172.31.18.167:6443: connect: connection refused" interval="1.6s" May 10 00:00:41.103559 containerd[1935]: time="2025-05-10T00:00:41.103135469Z" level=info msg="CreateContainer within sandbox \"7f9f64e5f410019b250e6ea8bb6ea07ba07cf42ffcc13e580493324d94fcc98e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:00:41.126721 containerd[1935]: time="2025-05-10T00:00:41.126143357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-167,Uid:7f5e46332a74624bc7809914abe772fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585\"" May 10 00:00:41.136522 containerd[1935]: time="2025-05-10T00:00:41.136451969Z" level=info msg="CreateContainer within sandbox \"dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:00:41.138475 containerd[1935]: time="2025-05-10T00:00:41.138327893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-167,Uid:abd855e552c0d04a5871b10a20767038,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19\"" May 10 00:00:41.145228 containerd[1935]: time="2025-05-10T00:00:41.145011893Z" level=info msg="CreateContainer within sandbox \"1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:00:41.151632 
containerd[1935]: time="2025-05-10T00:00:41.151465445Z" level=info msg="CreateContainer within sandbox \"7f9f64e5f410019b250e6ea8bb6ea07ba07cf42ffcc13e580493324d94fcc98e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe2a8fe3cd013bb5511142e7d33ada9de55053cb74ee2e13e27c93c9475ff6c1\"" May 10 00:00:41.152490 containerd[1935]: time="2025-05-10T00:00:41.152378861Z" level=info msg="StartContainer for \"fe2a8fe3cd013bb5511142e7d33ada9de55053cb74ee2e13e27c93c9475ff6c1\"" May 10 00:00:41.162209 kubelet[2837]: W0510 00:00:41.161649 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:41.162209 kubelet[2837]: E0510 00:00:41.161742 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.167:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:41.180455 containerd[1935]: time="2025-05-10T00:00:41.179215421Z" level=info msg="CreateContainer within sandbox \"1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03\"" May 10 00:00:41.182885 containerd[1935]: time="2025-05-10T00:00:41.182494973Z" level=info msg="StartContainer for \"b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03\"" May 10 00:00:41.194204 kubelet[2837]: I0510 00:00:41.194164 2837 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-167" May 10 00:00:41.195632 kubelet[2837]: E0510 00:00:41.195569 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.167:6443/api/v1/nodes\": dial tcp 172.31.18.167:6443: connect: connection refused" node="ip-172-31-18-167" May 10 00:00:41.200769 containerd[1935]: time="2025-05-10T00:00:41.200591837Z" level=info msg="CreateContainer within sandbox \"dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c\"" May 10 00:00:41.203879 containerd[1935]: time="2025-05-10T00:00:41.203356769Z" level=info msg="StartContainer for \"877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c\"" May 10 00:00:41.209558 systemd[1]: Started cri-containerd-fe2a8fe3cd013bb5511142e7d33ada9de55053cb74ee2e13e27c93c9475ff6c1.scope - libcontainer container fe2a8fe3cd013bb5511142e7d33ada9de55053cb74ee2e13e27c93c9475ff6c1. May 10 00:00:41.282465 systemd[1]: Started cri-containerd-b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03.scope - libcontainer container b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03. May 10 00:00:41.298374 systemd[1]: Started cri-containerd-877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c.scope - libcontainer container 877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c. 
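Each "Started cri-containerd-....scope" line above is a transient systemd scope wrapping one container; the unit name is just the container's full 64-hex containerd ID with a fixed prefix and suffix. Reconstructed from the log lines themselves (not from containerd source):

    def scope_unit(container_id: str) -> str:
        # Transient scope unit name used for CRI containers in this log.
        return f"cri-containerd-{container_id}.scope"

    cid = "fe2a8fe3cd013bb5511142e7d33ada9de55053cb74ee2e13e27c93c9475ff6c1"
    print(scope_unit(cid))  # the kube-apiserver container's scope above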
May 10 00:00:41.339196 kubelet[2837]: W0510 00:00:41.338966 2837 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:41.339196 kubelet[2837]: E0510 00:00:41.339053 2837 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.167:6443: connect: connection refused May 10 00:00:41.351144 containerd[1935]: time="2025-05-10T00:00:41.351069282Z" level=info msg="StartContainer for \"fe2a8fe3cd013bb5511142e7d33ada9de55053cb74ee2e13e27c93c9475ff6c1\" returns successfully" May 10 00:00:41.420567 containerd[1935]: time="2025-05-10T00:00:41.420416262Z" level=info msg="StartContainer for \"b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03\" returns successfully" May 10 00:00:41.429396 containerd[1935]: time="2025-05-10T00:00:41.429316495Z" level=info msg="StartContainer for \"877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c\" returns successfully" May 10 00:00:42.799037 kubelet[2837]: I0510 00:00:42.798192 2837 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-167" May 10 00:00:45.245915 kubelet[2837]: E0510 00:00:45.245843 2837 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-167\" not found" node="ip-172-31-18-167" May 10 00:00:45.298831 kubelet[2837]: E0510 00:00:45.296277 2837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-167.183e0165358375d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-167,UID:ip-172-31-18-167,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-167,},FirstTimestamp:2025-05-10 00:00:39.667463634 +0000 UTC m=+1.430234108,LastTimestamp:2025-05-10 00:00:39.667463634 +0000 UTC m=+1.430234108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-167,}" May 10 00:00:45.339342 kubelet[2837]: I0510 00:00:45.338617 2837 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-167" May 10 00:00:45.667087 kubelet[2837]: I0510 00:00:45.666482 2837 apiserver.go:52] "Watching apiserver" May 10 00:00:45.684268 kubelet[2837]: I0510 00:00:45.683995 2837 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:00:45.799614 update_engine[1911]: I20250510 00:00:45.799531 1911 update_attempter.cc:509] Updating boot flags... May 10 00:00:45.937359 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3125) May 10 00:00:46.371817 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3116) May 10 00:00:47.703825 systemd[1]: Reloading requested from client PID 3294 ('systemctl') (unit session-9.scope)... May 10 00:00:47.703886 systemd[1]: Reloading... May 10 00:00:47.962819 zram_generator::config[3340]: No configuration found. 
May 10 00:00:48.194603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:00:48.404501 systemd[1]: Reloading finished in 699 ms. May 10 00:00:48.486965 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:48.509648 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:00:48.511875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:48.512193 systemd[1]: kubelet.service: Consumed 2.153s CPU time, 113.3M memory peak, 0B memory swap peak. May 10 00:00:48.526561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:48.904305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:48.916564 (kubelet)[3394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:00:49.064819 kubelet[3394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:00:49.065813 kubelet[3394]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:00:49.065813 kubelet[3394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:00:49.065813 kubelet[3394]: I0510 00:00:49.065592 3394 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:00:49.090419 kubelet[3394]: I0510 00:00:49.090360 3394 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:00:49.090419 kubelet[3394]: I0510 00:00:49.090408 3394 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:00:49.090851 kubelet[3394]: I0510 00:00:49.090816 3394 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:00:49.093567 kubelet[3394]: I0510 00:00:49.093517 3394 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:00:49.098857 kubelet[3394]: I0510 00:00:49.098154 3394 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:00:49.119842 kubelet[3394]: I0510 00:00:49.119725 3394 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:00:49.120312 kubelet[3394]: I0510 00:00:49.120260 3394 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:00:49.120601 kubelet[3394]: I0510 00:00:49.120310 3394 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:00:49.120759 kubelet[3394]: I0510 00:00:49.120629 3394 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:00:49.120759 kubelet[3394]: I0510 00:00:49.120653 3394 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:00:49.120759 kubelet[3394]: I0510 00:00:49.120727 3394 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:49.120972 kubelet[3394]: I0510 00:00:49.120956 3394 kubelet.go:400] "Attempting to sync node with API server" May 10 00:00:49.121025 kubelet[3394]: I0510 00:00:49.120981 3394 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:00:49.121081 kubelet[3394]: I0510 00:00:49.121054 3394 kubelet.go:312] "Adding apiserver pod source" May 10 00:00:49.121145 kubelet[3394]: I0510 00:00:49.121090 3394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:00:49.128901 kubelet[3394]: I0510 00:00:49.128842 3394 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:00:49.129170 kubelet[3394]: I0510 00:00:49.129137 3394 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:00:49.130805 kubelet[3394]: I0510 00:00:49.129813 3394 server.go:1264] "Started kubelet" May 10 00:00:49.144278 kubelet[3394]: I0510 00:00:49.144189 3394 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:00:49.145797 kubelet[3394]: I0510 00:00:49.145733 3394 server.go:455] "Adding debug handlers to kubelet server" May 10 00:00:49.168904 kubelet[3394]: I0510 00:00:49.168487 3394 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:00:49.171385 kubelet[3394]: I0510 00:00:49.170666 3394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:00:49.171385 kubelet[3394]: I0510 00:00:49.171147 3394 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:00:49.207704 kubelet[3394]: I0510 00:00:49.207078 3394 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:00:49.209923 kubelet[3394]: I0510 00:00:49.209584 3394 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:00:49.210047 kubelet[3394]: I0510 00:00:49.209926 3394 reconciler.go:26] "Reconciler: start to sync state" May 10 00:00:49.220827 kubelet[3394]: I0510 00:00:49.219930 3394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:00:49.224410 kubelet[3394]: I0510 00:00:49.224288 3394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:00:49.224410 kubelet[3394]: I0510 00:00:49.224361 3394 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:00:49.224410 kubelet[3394]: I0510 00:00:49.224398 3394 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:00:49.224648 kubelet[3394]: E0510 00:00:49.224473 3394 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:00:49.251389 kubelet[3394]: I0510 00:00:49.249492 3394 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:00:49.257834 kubelet[3394]: E0510 00:00:49.255863 3394 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:00:49.263831 kubelet[3394]: I0510 00:00:49.262711 3394 factory.go:221] Registration of the containerd container factory successfully May 10 00:00:49.263831 kubelet[3394]: I0510 00:00:49.262811 3394 factory.go:221] Registration of the systemd container factory successfully May 10 00:00:49.321947 kubelet[3394]: I0510 00:00:49.321729 3394 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-167" May 10 00:00:49.326280 kubelet[3394]: E0510 00:00:49.325005 3394 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 00:00:49.351839 kubelet[3394]: I0510 00:00:49.351226 3394 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-167" May 10 00:00:49.351839 kubelet[3394]: I0510 00:00:49.351345 3394 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-167" May 10 00:00:49.429901 kubelet[3394]: I0510 00:00:49.429334 3394 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:00:49.429901 kubelet[3394]: I0510 00:00:49.429367 3394 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:00:49.429901 kubelet[3394]: I0510 00:00:49.429403 3394 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:49.432135 kubelet[3394]: I0510 00:00:49.432080 3394 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:00:49.432719 kubelet[3394]: I0510 00:00:49.432123 3394 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:00:49.432719 kubelet[3394]: I0510 00:00:49.432183 3394 policy_none.go:49] "None policy: Start" May 10 00:00:49.436525 kubelet[3394]: I0510 00:00:49.435956 3394 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:00:49.436525 kubelet[3394]: I0510 00:00:49.436005 3394 state_mem.go:35] "Initializing new in-memory state store" May 10 00:00:49.436525 kubelet[3394]: I0510 00:00:49.436326 3394 state_mem.go:75] "Updated machine memory state" May 10 00:00:49.456793 kubelet[3394]: I0510 00:00:49.455307 3394 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:00:49.460165 kubelet[3394]: I0510 00:00:49.457464 3394 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:00:49.460165 kubelet[3394]: I0510 00:00:49.458539 3394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:00:49.525511 kubelet[3394]: I0510 00:00:49.525461 3394 topology_manager.go:215] "Topology Admit Handler" podUID="2ac449d9888007ab5301334ba1a67612" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-167" May 10 00:00:49.526043 kubelet[3394]: I0510 00:00:49.526013 3394 topology_manager.go:215] "Topology Admit Handler" podUID="7f5e46332a74624bc7809914abe772fc" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-167" May 10 00:00:49.526498 kubelet[3394]: I0510 00:00:49.526473 3394 topology_manager.go:215] "Topology Admit Handler" podUID="abd855e552c0d04a5871b10a20767038" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-167" May 10 00:00:49.615743 kubelet[3394]: I0510 00:00:49.614831 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:49.616245 kubelet[3394]: I0510 00:00:49.616019 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ac449d9888007ab5301334ba1a67612-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-167\" (UID: \"2ac449d9888007ab5301334ba1a67612\") " pod="kube-system/kube-apiserver-ip-172-31-18-167" May 10 00:00:49.616245 kubelet[3394]: I0510 00:00:49.616072 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ac449d9888007ab5301334ba1a67612-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-167\" (UID: \"2ac449d9888007ab5301334ba1a67612\") " pod="kube-system/kube-apiserver-ip-172-31-18-167" May 10 00:00:49.617475 kubelet[3394]: I0510 00:00:49.616157 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:49.617475 kubelet[3394]: I0510 00:00:49.616603 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:49.618073 kubelet[3394]: I0510 00:00:49.617804 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:49.618073 kubelet[3394]: I0510 00:00:49.617940 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f5e46332a74624bc7809914abe772fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-167\" (UID: \"7f5e46332a74624bc7809914abe772fc\") " pod="kube-system/kube-controller-manager-ip-172-31-18-167" May 10 00:00:49.618462 kubelet[3394]: I0510 00:00:49.618035 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abd855e552c0d04a5871b10a20767038-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-167\" (UID: \"abd855e552c0d04a5871b10a20767038\") " pod="kube-system/kube-scheduler-ip-172-31-18-167" May 10 00:00:49.618462 kubelet[3394]: I0510 00:00:49.618213 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ac449d9888007ab5301334ba1a67612-ca-certs\") pod \"kube-apiserver-ip-172-31-18-167\" (UID: \"2ac449d9888007ab5301334ba1a67612\") " pod="kube-system/kube-apiserver-ip-172-31-18-167" May 10 00:00:50.124225 kubelet[3394]: I0510 00:00:50.123847 3394 apiserver.go:52] "Watching apiserver" May 10 00:00:50.210678 
kubelet[3394]: I0510 00:00:50.210596 3394 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:00:50.415514 kubelet[3394]: I0510 00:00:50.415235 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-167" podStartSLOduration=1.415213719 podStartE2EDuration="1.415213719s" podCreationTimestamp="2025-05-10 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:50.375621795 +0000 UTC m=+1.446374420" watchObservedRunningTime="2025-05-10 00:00:50.415213719 +0000 UTC m=+1.485966320" May 10 00:00:50.444360 kubelet[3394]: I0510 00:00:50.443456 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-167" podStartSLOduration=1.443434011 podStartE2EDuration="1.443434011s" podCreationTimestamp="2025-05-10 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:50.415169943 +0000 UTC m=+1.485922556" watchObservedRunningTime="2025-05-10 00:00:50.443434011 +0000 UTC m=+1.514186624" May 10 00:00:50.445360 kubelet[3394]: I0510 00:00:50.444966 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-167" podStartSLOduration=1.444943983 podStartE2EDuration="1.444943983s" podCreationTimestamp="2025-05-10 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:50.444587979 +0000 UTC m=+1.515340592" watchObservedRunningTime="2025-05-10 00:00:50.444943983 +0000 UTC m=+1.515696692" May 10 00:00:54.251412 sudo[2281]: pam_unix(sudo:session): session closed for user root May 10 00:00:54.275192 sshd[2278]: pam_unix(sshd:session): session closed for user core May 10 00:00:54.281432 systemd[1]: sshd@8-172.31.18.167:22-147.75.109.163:56344.service: Deactivated successfully. May 10 00:00:54.285554 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:00:54.286122 systemd[1]: session-9.scope: Consumed 9.553s CPU time, 186.6M memory peak, 0B memory swap peak. May 10 00:00:54.289056 systemd-logind[1910]: Session 9 logged out. Waiting for processes to exit. May 10 00:00:54.291969 systemd-logind[1910]: Removed session 9. May 10 00:01:02.530669 kubelet[3394]: I0510 00:01:02.529975 3394 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:01:02.531287 containerd[1935]: time="2025-05-10T00:01:02.530526003Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
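The "Updating runtime config through cri with podcidr" line above hands this node's pod range to the container runtime; the kubelet then waits for a CNI config to appear (the tigera-operator pulled below will provide Calico's). As a quick check of what 192.168.0.0/24 means for the node:

    import ipaddress

    # Pod CIDR from the kuberuntime_manager line above.
    cidr = ipaddress.ip_network("192.168.0.0/24")
    print(cidr.num_addresses)         # 256 addresses in the node's pod range
    print(list(cidr.hosts())[:2])     # first assignable pod IPs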
May 10 00:01:02.531698 kubelet[3394]: I0510 00:01:02.530832 3394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 10 00:01:02.686852 kubelet[3394]: I0510 00:01:02.686768 3394 topology_manager.go:215] "Topology Admit Handler" podUID="a6668da4-f486-4579-8705-0fd3a9ac38ef" podNamespace="kube-system" podName="kube-proxy-lv4cn"
May 10 00:01:02.703190 kubelet[3394]: I0510 00:01:02.702919 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6668da4-f486-4579-8705-0fd3a9ac38ef-xtables-lock\") pod \"kube-proxy-lv4cn\" (UID: \"a6668da4-f486-4579-8705-0fd3a9ac38ef\") " pod="kube-system/kube-proxy-lv4cn"
May 10 00:01:02.703190 kubelet[3394]: I0510 00:01:02.702996 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5b9g\" (UniqueName: \"kubernetes.io/projected/a6668da4-f486-4579-8705-0fd3a9ac38ef-kube-api-access-l5b9g\") pod \"kube-proxy-lv4cn\" (UID: \"a6668da4-f486-4579-8705-0fd3a9ac38ef\") " pod="kube-system/kube-proxy-lv4cn"
May 10 00:01:02.703190 kubelet[3394]: I0510 00:01:02.703041 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6668da4-f486-4579-8705-0fd3a9ac38ef-kube-proxy\") pod \"kube-proxy-lv4cn\" (UID: \"a6668da4-f486-4579-8705-0fd3a9ac38ef\") " pod="kube-system/kube-proxy-lv4cn"
May 10 00:01:02.703190 kubelet[3394]: I0510 00:01:02.703083 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6668da4-f486-4579-8705-0fd3a9ac38ef-lib-modules\") pod \"kube-proxy-lv4cn\" (UID: \"a6668da4-f486-4579-8705-0fd3a9ac38ef\") " pod="kube-system/kube-proxy-lv4cn"
May 10 00:01:02.707453 systemd[1]: Created slice kubepods-besteffort-poda6668da4_f486_4579_8705_0fd3a9ac38ef.slice - libcontainer container kubepods-besteffort-poda6668da4_f486_4579_8705_0fd3a9ac38ef.slice.
May 10 00:01:03.020356 containerd[1935]: time="2025-05-10T00:01:03.020276018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv4cn,Uid:a6668da4-f486-4579-8705-0fd3a9ac38ef,Namespace:kube-system,Attempt:0,}"
May 10 00:01:03.091468 kubelet[3394]: I0510 00:01:03.089929 3394 topology_manager.go:215] "Topology Admit Handler" podUID="bf332880-ac89-49f1-ac85-c63a82973a24" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-vf67x"
May 10 00:01:03.105870 containerd[1935]: time="2025-05-10T00:01:03.102578378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:01:03.105870 containerd[1935]: time="2025-05-10T00:01:03.102675170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:01:03.105870 containerd[1935]: time="2025-05-10T00:01:03.102712154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:03.105870 containerd[1935]: time="2025-05-10T00:01:03.102912194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:03.109511 kubelet[3394]: I0510 00:01:03.109435 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf332880-ac89-49f1-ac85-c63a82973a24-var-lib-calico\") pod \"tigera-operator-797db67f8-vf67x\" (UID: \"bf332880-ac89-49f1-ac85-c63a82973a24\") " pod="tigera-operator/tigera-operator-797db67f8-vf67x"
May 10 00:01:03.110151 kubelet[3394]: I0510 00:01:03.110098 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246c4\" (UniqueName: \"kubernetes.io/projected/bf332880-ac89-49f1-ac85-c63a82973a24-kube-api-access-246c4\") pod \"tigera-operator-797db67f8-vf67x\" (UID: \"bf332880-ac89-49f1-ac85-c63a82973a24\") " pod="tigera-operator/tigera-operator-797db67f8-vf67x"
May 10 00:01:03.126150 systemd[1]: Created slice kubepods-besteffort-podbf332880_ac89_49f1_ac85_c63a82973a24.slice - libcontainer container kubepods-besteffort-podbf332880_ac89_49f1_ac85_c63a82973a24.slice.
May 10 00:01:03.170837 systemd[1]: Started cri-containerd-639359140f8ed0190b4ea6e7d6f18dd45677c95682a75176c2719f208e69ef40.scope - libcontainer container 639359140f8ed0190b4ea6e7d6f18dd45677c95682a75176c2719f208e69ef40.
May 10 00:01:03.222804 containerd[1935]: time="2025-05-10T00:01:03.221955639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv4cn,Uid:a6668da4-f486-4579-8705-0fd3a9ac38ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"639359140f8ed0190b4ea6e7d6f18dd45677c95682a75176c2719f208e69ef40\""
May 10 00:01:03.232658 containerd[1935]: time="2025-05-10T00:01:03.232570563Z" level=info msg="CreateContainer within sandbox \"639359140f8ed0190b4ea6e7d6f18dd45677c95682a75176c2719f208e69ef40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 10 00:01:03.268375 containerd[1935]: time="2025-05-10T00:01:03.268308567Z" level=info msg="CreateContainer within sandbox \"639359140f8ed0190b4ea6e7d6f18dd45677c95682a75176c2719f208e69ef40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b25651e13202fc52587a9fae5651ecbf77fcb300212cdda79af6f522f205a2a9\""
May 10 00:01:03.269457 containerd[1935]: time="2025-05-10T00:01:03.269398383Z" level=info msg="StartContainer for \"b25651e13202fc52587a9fae5651ecbf77fcb300212cdda79af6f522f205a2a9\""
May 10 00:01:03.316333 systemd[1]: Started cri-containerd-b25651e13202fc52587a9fae5651ecbf77fcb300212cdda79af6f522f205a2a9.scope - libcontainer container b25651e13202fc52587a9fae5651ecbf77fcb300212cdda79af6f522f205a2a9.
May 10 00:01:03.382935 containerd[1935]: time="2025-05-10T00:01:03.382825144Z" level=info msg="StartContainer for \"b25651e13202fc52587a9fae5651ecbf77fcb300212cdda79af6f522f205a2a9\" returns successfully"
May 10 00:01:03.437461 containerd[1935]: time="2025-05-10T00:01:03.436897588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-vf67x,Uid:bf332880-ac89-49f1-ac85-c63a82973a24,Namespace:tigera-operator,Attempt:0,}"
May 10 00:01:03.496248 containerd[1935]: time="2025-05-10T00:01:03.495765652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:01:03.496248 containerd[1935]: time="2025-05-10T00:01:03.495899332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:01:03.496563 containerd[1935]: time="2025-05-10T00:01:03.495951040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:03.496563 containerd[1935]: time="2025-05-10T00:01:03.496155784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:03.538065 systemd[1]: Started cri-containerd-8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba.scope - libcontainer container 8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba.
May 10 00:01:03.615116 containerd[1935]: time="2025-05-10T00:01:03.614590733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-vf67x,Uid:bf332880-ac89-49f1-ac85-c63a82973a24,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba\""
May 10 00:01:03.619671 containerd[1935]: time="2025-05-10T00:01:03.619601141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 10 00:01:05.473060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229673312.mount: Deactivated successfully.
May 10 00:01:06.069681 containerd[1935]: time="2025-05-10T00:01:06.068823257Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:06.070898 containerd[1935]: time="2025-05-10T00:01:06.070690349Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 10 00:01:06.073136 containerd[1935]: time="2025-05-10T00:01:06.073061465Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:06.078075 containerd[1935]: time="2025-05-10T00:01:06.077975285Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:01:06.079958 containerd[1935]: time="2025-05-10T00:01:06.079740689Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.460051504s"
May 10 00:01:06.079958 containerd[1935]: time="2025-05-10T00:01:06.079826069Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 10 00:01:06.084880 containerd[1935]: time="2025-05-10T00:01:06.084663533Z" level=info msg="CreateContainer within sandbox \"8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 10 00:01:06.111351 containerd[1935]: time="2025-05-10T00:01:06.111256925Z" level=info msg="CreateContainer within sandbox \"8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087\""
May 10 00:01:06.115264 containerd[1935]: time="2025-05-10T00:01:06.115052945Z" level=info msg="StartContainer for \"925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087\""
May 10 00:01:06.171078 systemd[1]: Started cri-containerd-925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087.scope - libcontainer container 925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087.
May 10 00:01:06.215541 containerd[1935]: time="2025-05-10T00:01:06.215478402Z" level=info msg="StartContainer for \"925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087\" returns successfully"
May 10 00:01:06.378240 kubelet[3394]: I0510 00:01:06.378043 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lv4cn" podStartSLOduration=4.378019242 podStartE2EDuration="4.378019242s" podCreationTimestamp="2025-05-10 00:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:04.370911424 +0000 UTC m=+15.441664109" watchObservedRunningTime="2025-05-10 00:01:06.378019242 +0000 UTC m=+17.448771843"
May 10 00:01:11.983800 kubelet[3394]: I0510 00:01:11.983662 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-vf67x" podStartSLOduration=7.519509206 podStartE2EDuration="9.98363765s" podCreationTimestamp="2025-05-10 00:01:02 +0000 UTC" firstStartedPulling="2025-05-10 00:01:03.617696753 +0000 UTC m=+14.688449354" lastFinishedPulling="2025-05-10 00:01:06.081825197 +0000 UTC m=+17.152577798" observedRunningTime="2025-05-10 00:01:06.380961318 +0000 UTC m=+17.451713907" watchObservedRunningTime="2025-05-10 00:01:11.98363765 +0000 UTC m=+23.054390263"
May 10 00:01:11.986639 kubelet[3394]: I0510 00:01:11.985575 3394 topology_manager.go:215] "Topology Admit Handler" podUID="49daddfb-451f-4dfe-8deb-6b014b258aae" podNamespace="calico-system" podName="calico-typha-bf5c99ddf-6wxd6"
May 10 00:01:12.008077 systemd[1]: Created slice kubepods-besteffort-pod49daddfb_451f_4dfe_8deb_6b014b258aae.slice - libcontainer container kubepods-besteffort-pod49daddfb_451f_4dfe_8deb_6b014b258aae.slice.
May 10 00:01:12.074225 kubelet[3394]: I0510 00:01:12.073465 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49daddfb-451f-4dfe-8deb-6b014b258aae-tigera-ca-bundle\") pod \"calico-typha-bf5c99ddf-6wxd6\" (UID: \"49daddfb-451f-4dfe-8deb-6b014b258aae\") " pod="calico-system/calico-typha-bf5c99ddf-6wxd6"
May 10 00:01:12.074225 kubelet[3394]: I0510 00:01:12.073543 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxts\" (UniqueName: \"kubernetes.io/projected/49daddfb-451f-4dfe-8deb-6b014b258aae-kube-api-access-ftxts\") pod \"calico-typha-bf5c99ddf-6wxd6\" (UID: \"49daddfb-451f-4dfe-8deb-6b014b258aae\") " pod="calico-system/calico-typha-bf5c99ddf-6wxd6"
May 10 00:01:12.074225 kubelet[3394]: I0510 00:01:12.073587 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/49daddfb-451f-4dfe-8deb-6b014b258aae-typha-certs\") pod \"calico-typha-bf5c99ddf-6wxd6\" (UID: \"49daddfb-451f-4dfe-8deb-6b014b258aae\") " pod="calico-system/calico-typha-bf5c99ddf-6wxd6"
May 10 00:01:12.191763 kubelet[3394]: I0510 00:01:12.190218 3394 topology_manager.go:215] "Topology Admit Handler" podUID="c864c119-49f5-4e01-96de-0caf85ba913f" podNamespace="calico-system" podName="calico-node-f66gn"
May 10 00:01:12.235505 systemd[1]: Created slice kubepods-besteffort-podc864c119_49f5_4e01_96de_0caf85ba913f.slice - libcontainer container kubepods-besteffort-podc864c119_49f5_4e01_96de_0caf85ba913f.slice.
May 10 00:01:12.274670 kubelet[3394]: I0510 00:01:12.274591 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-cni-net-dir\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.274670 kubelet[3394]: I0510 00:01:12.274670 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-flexvol-driver-host\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.274968 kubelet[3394]: I0510 00:01:12.274715 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-var-lib-calico\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.274968 kubelet[3394]: I0510 00:01:12.274752 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-policysync\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.274968 kubelet[3394]: I0510 00:01:12.274815 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-cni-bin-dir\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.274968 kubelet[3394]: I0510 00:01:12.274856 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-lib-modules\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.274968 kubelet[3394]: I0510 00:01:12.274891 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-xtables-lock\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.275233 kubelet[3394]: I0510 00:01:12.274928 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c864c119-49f5-4e01-96de-0caf85ba913f-tigera-ca-bundle\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.275233 kubelet[3394]: I0510 00:01:12.274961 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c864c119-49f5-4e01-96de-0caf85ba913f-node-certs\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.275233 kubelet[3394]: I0510 00:01:12.274997 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4gg\" (UniqueName: \"kubernetes.io/projected/c864c119-49f5-4e01-96de-0caf85ba913f-kube-api-access-7s4gg\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.275233 kubelet[3394]: I0510 00:01:12.275032 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-var-run-calico\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.275233 kubelet[3394]: I0510 00:01:12.275073 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c864c119-49f5-4e01-96de-0caf85ba913f-cni-log-dir\") pod \"calico-node-f66gn\" (UID: \"c864c119-49f5-4e01-96de-0caf85ba913f\") " pod="calico-system/calico-node-f66gn"
May 10 00:01:12.317868 containerd[1935]: time="2025-05-10T00:01:12.317797692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf5c99ddf-6wxd6,Uid:49daddfb-451f-4dfe-8deb-6b014b258aae,Namespace:calico-system,Attempt:0,}"
May 10 00:01:12.369818 containerd[1935]: time="2025-05-10T00:01:12.369643824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:01:12.370357 containerd[1935]: time="2025-05-10T00:01:12.369752052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:01:12.370357 containerd[1935]: time="2025-05-10T00:01:12.369826284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:12.370357 containerd[1935]: time="2025-05-10T00:01:12.370002708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:12.386086 kubelet[3394]: E0510 00:01:12.385421 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:01:12.386086 kubelet[3394]: W0510 00:01:12.385456 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:01:12.386086 kubelet[3394]: E0510 00:01:12.385503 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-line FlexVolume probe failure above (driver-call.go:262 / driver-call.go:149 / plugins.go:730) repeats roughly sixty more times between 00:01:12.388 and 00:01:12.633 as the kubelet re-probes the plugin directory; the repeats are collapsed here, and the distinct entries interleaved with them follow.]
May 10 00:01:12.468973 systemd[1]: Started cri-containerd-603f5a9ccce015edfb5d6bd300d99c202b509784c6b3607ef2f40aa18532cb27.scope - libcontainer container 603f5a9ccce015edfb5d6bd300d99c202b509784c6b3607ef2f40aa18532cb27.
May 10 00:01:12.498209 kubelet[3394]: I0510 00:01:12.498124 3394 topology_manager.go:215] "Topology Admit Handler" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" podNamespace="calico-system" podName="csi-node-driver-g5xkj"
May 10 00:01:12.499823 kubelet[3394]: E0510 00:01:12.498742 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443"
May 10 00:01:12.545793 containerd[1935]: time="2025-05-10T00:01:12.545705869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f66gn,Uid:c864c119-49f5-4e01-96de-0caf85ba913f,Namespace:calico-system,Attempt:0,}"
May 10 00:01:12.610828 kubelet[3394]: I0510 00:01:12.609390 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76cd\" (UniqueName: \"kubernetes.io/projected/99d7a31f-c874-41d9-9abb-62b4e619c443-kube-api-access-q76cd\") pod \"csi-node-driver-g5xkj\" (UID: \"99d7a31f-c874-41d9-9abb-62b4e619c443\") " pod="calico-system/csi-node-driver-g5xkj"
May 10 00:01:12.614892 kubelet[3394]: I0510 00:01:12.613135 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99d7a31f-c874-41d9-9abb-62b4e619c443-varrun\") pod \"csi-node-driver-g5xkj\" (UID: \"99d7a31f-c874-41d9-9abb-62b4e619c443\") " pod="calico-system/csi-node-driver-g5xkj"
May 10 00:01:12.621072 kubelet[3394]: I0510 00:01:12.620857 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99d7a31f-c874-41d9-9abb-62b4e619c443-kubelet-dir\") pod \"csi-node-driver-g5xkj\" (UID: \"99d7a31f-c874-41d9-9abb-62b4e619c443\") " pod="calico-system/csi-node-driver-g5xkj"
May 10 00:01:12.624841 containerd[1935]: time="2025-05-10T00:01:12.622507981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:01:12.625763 kubelet[3394]: I0510 00:01:12.625282 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99d7a31f-c874-41d9-9abb-62b4e619c443-socket-dir\") pod \"csi-node-driver-g5xkj\" (UID: \"99d7a31f-c874-41d9-9abb-62b4e619c443\") " pod="calico-system/csi-node-driver-g5xkj"
May 10 00:01:12.629663 kubelet[3394]: I0510 00:01:12.629422 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99d7a31f-c874-41d9-9abb-62b4e619c443-registration-dir\") pod \"csi-node-driver-g5xkj\" (UID: \"99d7a31f-c874-41d9-9abb-62b4e619c443\") " pod="calico-system/csi-node-driver-g5xkj"
May 10 00:01:12.631147 containerd[1935]: time="2025-05-10T00:01:12.622600273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:01:12.631147 containerd[1935]: time="2025-05-10T00:01:12.622629553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:12.631147 containerd[1935]: time="2025-05-10T00:01:12.622813669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:01:12.634364 kubelet[3394]: E0510 00:01:12.633975 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:01:12.634364 kubelet[3394]: W0510 00:01:12.634027 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:01:12.635316 kubelet[3394]: E0510 00:01:12.634827 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 10 00:01:12.636962 kubelet[3394]: E0510 00:01:12.636663 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.636962 kubelet[3394]: W0510 00:01:12.636699 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.636962 kubelet[3394]: E0510 00:01:12.636744 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.637933 kubelet[3394]: E0510 00:01:12.637301 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.637933 kubelet[3394]: W0510 00:01:12.637325 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.637933 kubelet[3394]: E0510 00:01:12.637352 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.638096 kubelet[3394]: E0510 00:01:12.637998 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.638096 kubelet[3394]: W0510 00:01:12.638044 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.638096 kubelet[3394]: E0510 00:01:12.638073 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.664722 containerd[1935]: time="2025-05-10T00:01:12.664651118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf5c99ddf-6wxd6,Uid:49daddfb-451f-4dfe-8deb-6b014b258aae,Namespace:calico-system,Attempt:0,} returns sandbox id \"603f5a9ccce015edfb5d6bd300d99c202b509784c6b3607ef2f40aa18532cb27\"" May 10 00:01:12.672033 containerd[1935]: time="2025-05-10T00:01:12.671957918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 00:01:12.702110 systemd[1]: Started cri-containerd-6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499.scope - libcontainer container 6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499. May 10 00:01:12.731617 kubelet[3394]: E0510 00:01:12.731571 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.731617 kubelet[3394]: W0510 00:01:12.731605 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.732335 kubelet[3394]: E0510 00:01:12.731635 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:12.733750 kubelet[3394]: E0510 00:01:12.733696 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.733750 kubelet[3394]: W0510 00:01:12.733734 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.733750 kubelet[3394]: E0510 00:01:12.733798 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.734330 kubelet[3394]: E0510 00:01:12.734289 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.734330 kubelet[3394]: W0510 00:01:12.734321 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.734488 kubelet[3394]: E0510 00:01:12.734389 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.735495 kubelet[3394]: E0510 00:01:12.735446 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.735495 kubelet[3394]: W0510 00:01:12.735483 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.735679 kubelet[3394]: E0510 00:01:12.735524 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.736855 kubelet[3394]: E0510 00:01:12.736798 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.736947 kubelet[3394]: W0510 00:01:12.736876 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.737931 kubelet[3394]: E0510 00:01:12.737871 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.739195 kubelet[3394]: E0510 00:01:12.739147 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.739195 kubelet[3394]: W0510 00:01:12.739184 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.739529 kubelet[3394]: E0510 00:01:12.739489 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:12.740137 kubelet[3394]: E0510 00:01:12.739701 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.740137 kubelet[3394]: W0510 00:01:12.739731 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.740867 kubelet[3394]: E0510 00:01:12.740162 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.740867 kubelet[3394]: W0510 00:01:12.740181 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.740867 kubelet[3394]: E0510 00:01:12.740335 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.740867 kubelet[3394]: E0510 00:01:12.740428 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.744962 kubelet[3394]: E0510 00:01:12.744898 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.744962 kubelet[3394]: W0510 00:01:12.744936 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.745182 kubelet[3394]: E0510 00:01:12.745068 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.745524 kubelet[3394]: E0510 00:01:12.745484 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.745524 kubelet[3394]: W0510 00:01:12.745515 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.745692 kubelet[3394]: E0510 00:01:12.745628 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.745988 kubelet[3394]: E0510 00:01:12.745951 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.745988 kubelet[3394]: W0510 00:01:12.745980 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.746168 kubelet[3394]: E0510 00:01:12.746136 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:12.748277 kubelet[3394]: E0510 00:01:12.747813 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.748277 kubelet[3394]: W0510 00:01:12.747853 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.748277 kubelet[3394]: E0510 00:01:12.748051 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.748930 kubelet[3394]: E0510 00:01:12.748888 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.748930 kubelet[3394]: W0510 00:01:12.748920 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.749722 kubelet[3394]: E0510 00:01:12.749125 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.751411 kubelet[3394]: E0510 00:01:12.751362 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.751411 kubelet[3394]: W0510 00:01:12.751399 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.752849 kubelet[3394]: E0510 00:01:12.751728 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.754567 kubelet[3394]: E0510 00:01:12.754520 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.754567 kubelet[3394]: W0510 00:01:12.754557 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.754759 kubelet[3394]: E0510 00:01:12.754689 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.755947 kubelet[3394]: E0510 00:01:12.755086 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.755947 kubelet[3394]: W0510 00:01:12.755106 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.755947 kubelet[3394]: E0510 00:01:12.755218 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:12.755947 kubelet[3394]: E0510 00:01:12.755465 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.755947 kubelet[3394]: W0510 00:01:12.755480 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.755947 kubelet[3394]: E0510 00:01:12.755601 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.755947 kubelet[3394]: E0510 00:01:12.755871 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.755947 kubelet[3394]: W0510 00:01:12.755890 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.756489 kubelet[3394]: E0510 00:01:12.756114 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.756489 kubelet[3394]: E0510 00:01:12.756367 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.756489 kubelet[3394]: W0510 00:01:12.756384 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.757822 kubelet[3394]: E0510 00:01:12.756894 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.757822 kubelet[3394]: E0510 00:01:12.757218 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.757822 kubelet[3394]: W0510 00:01:12.757240 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.757822 kubelet[3394]: E0510 00:01:12.757548 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.757822 kubelet[3394]: E0510 00:01:12.757633 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.757822 kubelet[3394]: W0510 00:01:12.757650 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.757822 kubelet[3394]: E0510 00:01:12.757808 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:12.758825 kubelet[3394]: E0510 00:01:12.758739 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.758825 kubelet[3394]: W0510 00:01:12.758796 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.758825 kubelet[3394]: E0510 00:01:12.758871 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.759152 kubelet[3394]: E0510 00:01:12.759118 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.759152 kubelet[3394]: W0510 00:01:12.759134 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.759152 kubelet[3394]: E0510 00:01:12.759497 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.759152 kubelet[3394]: W0510 00:01:12.759515 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.759152 kubelet[3394]: E0510 00:01:12.759539 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.759152 kubelet[3394]: E0510 00:01:12.759577 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.760230 kubelet[3394]: E0510 00:01:12.759995 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.760230 kubelet[3394]: W0510 00:01:12.760031 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.760230 kubelet[3394]: E0510 00:01:12.760057 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:12.798143 kubelet[3394]: E0510 00:01:12.798061 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:12.798143 kubelet[3394]: W0510 00:01:12.798120 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:12.799436 kubelet[3394]: E0510 00:01:12.798154 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:12.854145 containerd[1935]: time="2025-05-10T00:01:12.854080827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f66gn,Uid:c864c119-49f5-4e01-96de-0caf85ba913f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\"" May 10 00:01:13.195358 systemd[1]: run-containerd-runc-k8s.io-603f5a9ccce015edfb5d6bd300d99c202b509784c6b3607ef2f40aa18532cb27-runc.hfh4g1.mount: Deactivated successfully. May 10 00:01:14.225187 kubelet[3394]: E0510 00:01:14.225121 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:14.791660 containerd[1935]: time="2025-05-10T00:01:14.791543704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:14.793935 containerd[1935]: time="2025-05-10T00:01:14.793554532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 10 00:01:14.795050 containerd[1935]: time="2025-05-10T00:01:14.794952724Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:14.798708 containerd[1935]: time="2025-05-10T00:01:14.798625036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:14.800411 containerd[1935]: time="2025-05-10T00:01:14.800200348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.128180018s" May 10 00:01:14.800411 containerd[1935]: time="2025-05-10T00:01:14.800257228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 10 00:01:14.803930 containerd[1935]: time="2025-05-10T00:01:14.803256136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 00:01:14.829910 containerd[1935]: time="2025-05-10T00:01:14.829854040Z" level=info msg="CreateContainer within sandbox \"603f5a9ccce015edfb5d6bd300d99c202b509784c6b3607ef2f40aa18532cb27\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 00:01:14.850765 containerd[1935]: time="2025-05-10T00:01:14.850686281Z" level=info msg="CreateContainer within sandbox \"603f5a9ccce015edfb5d6bd300d99c202b509784c6b3607ef2f40aa18532cb27\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"76020a5e68199c78c5e0ba25ee35dbfa85d1d598d3258fb12242ee67c37e79ae\"" May 10 00:01:14.851521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407490790.mount: Deactivated successfully. 
May 10 00:01:14.855586 containerd[1935]: time="2025-05-10T00:01:14.855541049Z" level=info msg="StartContainer for \"76020a5e68199c78c5e0ba25ee35dbfa85d1d598d3258fb12242ee67c37e79ae\"" May 10 00:01:14.916108 systemd[1]: Started cri-containerd-76020a5e68199c78c5e0ba25ee35dbfa85d1d598d3258fb12242ee67c37e79ae.scope - libcontainer container 76020a5e68199c78c5e0ba25ee35dbfa85d1d598d3258fb12242ee67c37e79ae. May 10 00:01:15.000621 containerd[1935]: time="2025-05-10T00:01:15.000554593Z" level=info msg="StartContainer for \"76020a5e68199c78c5e0ba25ee35dbfa85d1d598d3258fb12242ee67c37e79ae\" returns successfully" May 10 00:01:15.426399 kubelet[3394]: I0510 00:01:15.426293 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf5c99ddf-6wxd6" podStartSLOduration=2.293373853 podStartE2EDuration="4.426273435s" podCreationTimestamp="2025-05-10 00:01:11 +0000 UTC" firstStartedPulling="2025-05-10 00:01:12.66919865 +0000 UTC m=+23.739951251" lastFinishedPulling="2025-05-10 00:01:14.802098148 +0000 UTC m=+25.872850833" observedRunningTime="2025-05-10 00:01:15.425864283 +0000 UTC m=+26.496616968" watchObservedRunningTime="2025-05-10 00:01:15.426273435 +0000 UTC m=+26.497026036" May 10 00:01:15.433050 kubelet[3394]: E0510 00:01:15.432989 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.433050 kubelet[3394]: W0510 00:01:15.433034 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.433232 kubelet[3394]: E0510 00:01:15.433070 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.434159 kubelet[3394]: E0510 00:01:15.434119 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.434159 kubelet[3394]: W0510 00:01:15.434154 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.434432 kubelet[3394]: E0510 00:01:15.434205 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.435580 kubelet[3394]: E0510 00:01:15.435501 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.435900 kubelet[3394]: W0510 00:01:15.435533 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.435900 kubelet[3394]: E0510 00:01:15.435752 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:15.436653 kubelet[3394]: E0510 00:01:15.436470 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.436653 kubelet[3394]: W0510 00:01:15.436496 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.436653 kubelet[3394]: E0510 00:01:15.436520 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.436980 kubelet[3394]: E0510 00:01:15.436932 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.437044 kubelet[3394]: W0510 00:01:15.436990 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.437044 kubelet[3394]: E0510 00:01:15.437015 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.437434 kubelet[3394]: E0510 00:01:15.437407 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.437499 kubelet[3394]: W0510 00:01:15.437433 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.437613 kubelet[3394]: E0510 00:01:15.437455 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.438031 kubelet[3394]: E0510 00:01:15.437913 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.438031 kubelet[3394]: W0510 00:01:15.437959 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.438197 kubelet[3394]: E0510 00:01:15.437986 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.438852 kubelet[3394]: E0510 00:01:15.438807 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.438852 kubelet[3394]: W0510 00:01:15.438842 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.439075 kubelet[3394]: E0510 00:01:15.438871 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:15.439462 kubelet[3394]: E0510 00:01:15.439425 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.439565 kubelet[3394]: W0510 00:01:15.439504 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.439565 kubelet[3394]: E0510 00:01:15.439532 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.440272 kubelet[3394]: E0510 00:01:15.440227 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.440272 kubelet[3394]: W0510 00:01:15.440269 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.440666 kubelet[3394]: E0510 00:01:15.440299 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.441073 kubelet[3394]: E0510 00:01:15.441036 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.441150 kubelet[3394]: W0510 00:01:15.441102 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.441150 kubelet[3394]: E0510 00:01:15.441131 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.441502 kubelet[3394]: E0510 00:01:15.441476 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.441588 kubelet[3394]: W0510 00:01:15.441501 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.441588 kubelet[3394]: E0510 00:01:15.441522 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.442053 kubelet[3394]: E0510 00:01:15.442011 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.442122 kubelet[3394]: W0510 00:01:15.442052 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.442122 kubelet[3394]: E0510 00:01:15.442075 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:15.442472 kubelet[3394]: E0510 00:01:15.442446 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.442535 kubelet[3394]: W0510 00:01:15.442472 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.442535 kubelet[3394]: E0510 00:01:15.442498 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.442962 kubelet[3394]: E0510 00:01:15.442935 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.443024 kubelet[3394]: W0510 00:01:15.442961 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.443077 kubelet[3394]: E0510 00:01:15.442985 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.466575 kubelet[3394]: E0510 00:01:15.466363 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.466575 kubelet[3394]: W0510 00:01:15.466393 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.466575 kubelet[3394]: E0510 00:01:15.466419 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.466963 kubelet[3394]: E0510 00:01:15.466927 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.466963 kubelet[3394]: W0510 00:01:15.466958 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.467252 kubelet[3394]: E0510 00:01:15.466996 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.467433 kubelet[3394]: E0510 00:01:15.467404 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.467532 kubelet[3394]: W0510 00:01:15.467432 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.467532 kubelet[3394]: E0510 00:01:15.467473 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:15.468202 kubelet[3394]: E0510 00:01:15.467962 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.468202 kubelet[3394]: W0510 00:01:15.468007 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.468202 kubelet[3394]: E0510 00:01:15.468055 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.468575 kubelet[3394]: E0510 00:01:15.468430 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.468575 kubelet[3394]: W0510 00:01:15.468460 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.469151 kubelet[3394]: E0510 00:01:15.468573 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.469151 kubelet[3394]: E0510 00:01:15.468975 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.469151 kubelet[3394]: W0510 00:01:15.468996 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.469932 kubelet[3394]: E0510 00:01:15.469166 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.469932 kubelet[3394]: E0510 00:01:15.469403 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.469932 kubelet[3394]: W0510 00:01:15.469422 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.469932 kubelet[3394]: E0510 00:01:15.469722 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.469932 kubelet[3394]: E0510 00:01:15.469744 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.469932 kubelet[3394]: W0510 00:01:15.469759 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.469932 kubelet[3394]: E0510 00:01:15.469851 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:15.471854 kubelet[3394]: E0510 00:01:15.470248 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.471854 kubelet[3394]: W0510 00:01:15.470290 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.471854 kubelet[3394]: E0510 00:01:15.470390 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.471854 kubelet[3394]: E0510 00:01:15.470877 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.471854 kubelet[3394]: W0510 00:01:15.470899 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.471854 kubelet[3394]: E0510 00:01:15.470935 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.471854 kubelet[3394]: E0510 00:01:15.471315 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.471854 kubelet[3394]: W0510 00:01:15.471337 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.471854 kubelet[3394]: E0510 00:01:15.471361 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.473093 kubelet[3394]: E0510 00:01:15.472520 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.473093 kubelet[3394]: W0510 00:01:15.472553 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.473093 kubelet[3394]: E0510 00:01:15.472664 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.473436 kubelet[3394]: E0510 00:01:15.473403 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.473730 kubelet[3394]: W0510 00:01:15.473435 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.473730 kubelet[3394]: E0510 00:01:15.473476 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:15.474249 kubelet[3394]: E0510 00:01:15.474128 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.474249 kubelet[3394]: W0510 00:01:15.474157 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.474249 kubelet[3394]: E0510 00:01:15.474189 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.474560 kubelet[3394]: E0510 00:01:15.474530 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.474641 kubelet[3394]: W0510 00:01:15.474559 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.474970 kubelet[3394]: E0510 00:01:15.474728 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.475158 kubelet[3394]: E0510 00:01:15.475130 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.475220 kubelet[3394]: W0510 00:01:15.475157 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.475220 kubelet[3394]: E0510 00:01:15.475185 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.475661 kubelet[3394]: E0510 00:01:15.475632 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.475762 kubelet[3394]: W0510 00:01:15.475660 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.475762 kubelet[3394]: E0510 00:01:15.475683 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:01:15.476481 kubelet[3394]: E0510 00:01:15.476451 3394 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:01:15.476481 kubelet[3394]: W0510 00:01:15.476479 3394 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:01:15.476605 kubelet[3394]: E0510 00:01:15.476504 3394 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:01:16.058857 containerd[1935]: time="2025-05-10T00:01:16.058553487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:16.060481 containerd[1935]: time="2025-05-10T00:01:16.060423255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:01:16.063187 containerd[1935]: time="2025-05-10T00:01:16.063064011Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:16.068018 containerd[1935]: time="2025-05-10T00:01:16.067901163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:16.069861 containerd[1935]: time="2025-05-10T00:01:16.069276747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.265945671s" May 10 00:01:16.069861 containerd[1935]: time="2025-05-10T00:01:16.069340947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:01:16.074822 containerd[1935]: time="2025-05-10T00:01:16.074665623Z" level=info msg="CreateContainer within sandbox \"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:01:16.103882 containerd[1935]: time="2025-05-10T00:01:16.103760727Z" level=info msg="CreateContainer within sandbox \"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e\"" May 10 00:01:16.104644 containerd[1935]: time="2025-05-10T00:01:16.104579355Z" level=info msg="StartContainer for \"acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e\"" May 10 00:01:16.167096 systemd[1]: Started cri-containerd-acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e.scope - libcontainer container acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e. May 10 00:01:16.219569 containerd[1935]: time="2025-05-10T00:01:16.219467271Z" level=info msg="StartContainer for \"acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e\" returns successfully" May 10 00:01:16.224871 kubelet[3394]: E0510 00:01:16.224805 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:16.249532 systemd[1]: cri-containerd-acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e.scope: Deactivated successfully. 
May 10 00:01:16.414475 kubelet[3394]: I0510 00:01:16.414330 3394 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:16.482038 containerd[1935]: time="2025-05-10T00:01:16.481858517Z" level=info msg="shim disconnected" id=acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e namespace=k8s.io May 10 00:01:16.482038 containerd[1935]: time="2025-05-10T00:01:16.481932653Z" level=warning msg="cleaning up after shim disconnected" id=acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e namespace=k8s.io May 10 00:01:16.482038 containerd[1935]: time="2025-05-10T00:01:16.481952945Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:01:16.813866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acef41b1a26aa28f7c2d13f4bed016fa4d564874bd75029b564984f6ffa3be6e-rootfs.mount: Deactivated successfully. May 10 00:01:17.422339 containerd[1935]: time="2025-05-10T00:01:17.422278769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:01:18.225248 kubelet[3394]: E0510 00:01:18.225059 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:20.225723 kubelet[3394]: E0510 00:01:20.225554 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:21.355208 containerd[1935]: time="2025-05-10T00:01:21.355153521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:21.357050 containerd[1935]: time="2025-05-10T00:01:21.356969073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 10 00:01:21.357812 containerd[1935]: time="2025-05-10T00:01:21.357447717Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:21.366180 containerd[1935]: time="2025-05-10T00:01:21.365894253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:21.371042 containerd[1935]: time="2025-05-10T00:01:21.370974225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.948594212s" May 10 00:01:21.371351 containerd[1935]: time="2025-05-10T00:01:21.371043549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 10 00:01:21.376327 containerd[1935]: time="2025-05-10T00:01:21.376236693Z" level=info msg="CreateContainer within sandbox 
\"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 00:01:21.403491 containerd[1935]: time="2025-05-10T00:01:21.403409817Z" level=info msg="CreateContainer within sandbox \"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28\"" May 10 00:01:21.404873 containerd[1935]: time="2025-05-10T00:01:21.404475633Z" level=info msg="StartContainer for \"64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28\"" May 10 00:01:21.467122 systemd[1]: Started cri-containerd-64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28.scope - libcontainer container 64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28. May 10 00:01:21.521554 containerd[1935]: time="2025-05-10T00:01:21.521472934Z" level=info msg="StartContainer for \"64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28\" returns successfully" May 10 00:01:22.226012 kubelet[3394]: E0510 00:01:22.225940 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:22.413644 systemd[1]: cri-containerd-64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28.scope: Deactivated successfully. May 10 00:01:22.467122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28-rootfs.mount: Deactivated successfully. May 10 00:01:22.513947 kubelet[3394]: I0510 00:01:22.513803 3394 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 00:01:22.566902 kubelet[3394]: I0510 00:01:22.566026 3394 topology_manager.go:215] "Topology Admit Handler" podUID="462d92e2-f6b9-4559-82e4-3c84f91c0749" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7496j" May 10 00:01:22.579420 kubelet[3394]: I0510 00:01:22.578749 3394 topology_manager.go:215] "Topology Admit Handler" podUID="4bfdbcc0-fe8b-4871-a536-daaca6c5cc23" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nnp79" May 10 00:01:22.586691 kubelet[3394]: W0510 00:01:22.586217 3394 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-167" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-167' and this object May 10 00:01:22.586691 kubelet[3394]: E0510 00:01:22.586276 3394 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-167" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-167' and this object May 10 00:01:22.588558 systemd[1]: Created slice kubepods-burstable-pod462d92e2_f6b9_4559_82e4_3c84f91c0749.slice - libcontainer container kubepods-burstable-pod462d92e2_f6b9_4559_82e4_3c84f91c0749.slice. 
May 10 00:01:22.600514 kubelet[3394]: I0510 00:01:22.597900 3394 topology_manager.go:215] "Topology Admit Handler" podUID="ec53ba6d-6936-42e6-ac4d-b9640fa50bea" podNamespace="calico-system" podName="calico-kube-controllers-56879d8f96-w2m6s" May 10 00:01:22.605708 kubelet[3394]: I0510 00:01:22.603658 3394 topology_manager.go:215] "Topology Admit Handler" podUID="872abefe-e998-4710-ae06-61521f1057d1" podNamespace="calico-apiserver" podName="calico-apiserver-8444c9d58d-t6jjx" May 10 00:01:22.611090 kubelet[3394]: I0510 00:01:22.609492 3394 topology_manager.go:215] "Topology Admit Handler" podUID="6d08f91a-f116-4037-bee3-3fae03414811" podNamespace="calico-apiserver" podName="calico-apiserver-8444c9d58d-q5m7h" May 10 00:01:22.623597 systemd[1]: Created slice kubepods-burstable-pod4bfdbcc0_fe8b_4871_a536_daaca6c5cc23.slice - libcontainer container kubepods-burstable-pod4bfdbcc0_fe8b_4871_a536_daaca6c5cc23.slice. May 10 00:01:22.628502 kubelet[3394]: I0510 00:01:22.624579 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/462d92e2-f6b9-4559-82e4-3c84f91c0749-config-volume\") pod \"coredns-7db6d8ff4d-7496j\" (UID: \"462d92e2-f6b9-4559-82e4-3c84f91c0749\") " pod="kube-system/coredns-7db6d8ff4d-7496j" May 10 00:01:22.628502 kubelet[3394]: I0510 00:01:22.624645 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxf6p\" (UniqueName: \"kubernetes.io/projected/4bfdbcc0-fe8b-4871-a536-daaca6c5cc23-kube-api-access-jxf6p\") pod \"coredns-7db6d8ff4d-nnp79\" (UID: \"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23\") " pod="kube-system/coredns-7db6d8ff4d-nnp79" May 10 00:01:22.628502 kubelet[3394]: I0510 00:01:22.624705 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec53ba6d-6936-42e6-ac4d-b9640fa50bea-tigera-ca-bundle\") pod \"calico-kube-controllers-56879d8f96-w2m6s\" (UID: \"ec53ba6d-6936-42e6-ac4d-b9640fa50bea\") " pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" May 10 00:01:22.628502 kubelet[3394]: I0510 00:01:22.624795 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxdx\" (UniqueName: \"kubernetes.io/projected/ec53ba6d-6936-42e6-ac4d-b9640fa50bea-kube-api-access-ztxdx\") pod \"calico-kube-controllers-56879d8f96-w2m6s\" (UID: \"ec53ba6d-6936-42e6-ac4d-b9640fa50bea\") " pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" May 10 00:01:22.628502 kubelet[3394]: I0510 00:01:22.624858 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6pd6\" (UniqueName: \"kubernetes.io/projected/462d92e2-f6b9-4559-82e4-3c84f91c0749-kube-api-access-b6pd6\") pod \"coredns-7db6d8ff4d-7496j\" (UID: \"462d92e2-f6b9-4559-82e4-3c84f91c0749\") " pod="kube-system/coredns-7db6d8ff4d-7496j" May 10 00:01:22.628919 kubelet[3394]: I0510 00:01:22.624911 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfdbcc0-fe8b-4871-a536-daaca6c5cc23-config-volume\") pod \"coredns-7db6d8ff4d-nnp79\" (UID: \"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23\") " pod="kube-system/coredns-7db6d8ff4d-nnp79" May 10 00:01:22.628919 kubelet[3394]: W0510 00:01:22.625419 3394 reflector.go:547] 
object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-18-167" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-18-167' and this object May 10 00:01:22.628919 kubelet[3394]: E0510 00:01:22.625469 3394 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-18-167" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-18-167' and this object May 10 00:01:22.628919 kubelet[3394]: W0510 00:01:22.625542 3394 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-167" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-18-167' and this object May 10 00:01:22.628919 kubelet[3394]: E0510 00:01:22.625568 3394 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-167" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-18-167' and this object May 10 00:01:22.663451 systemd[1]: Created slice kubepods-besteffort-podec53ba6d_6936_42e6_ac4d_b9640fa50bea.slice - libcontainer container kubepods-besteffort-podec53ba6d_6936_42e6_ac4d_b9640fa50bea.slice. May 10 00:01:22.673292 systemd[1]: Created slice kubepods-besteffort-pod872abefe_e998_4710_ae06_61521f1057d1.slice - libcontainer container kubepods-besteffort-pod872abefe_e998_4710_ae06_61521f1057d1.slice. May 10 00:01:22.695661 systemd[1]: Created slice kubepods-besteffort-pod6d08f91a_f116_4037_bee3_3fae03414811.slice - libcontainer container kubepods-besteffort-pod6d08f91a_f116_4037_bee3_3fae03414811.slice. 
May 10 00:01:22.725566 kubelet[3394]: I0510 00:01:22.725513 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/872abefe-e998-4710-ae06-61521f1057d1-calico-apiserver-certs\") pod \"calico-apiserver-8444c9d58d-t6jjx\" (UID: \"872abefe-e998-4710-ae06-61521f1057d1\") " pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" May 10 00:01:22.725566 kubelet[3394]: I0510 00:01:22.725580 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6d08f91a-f116-4037-bee3-3fae03414811-calico-apiserver-certs\") pod \"calico-apiserver-8444c9d58d-q5m7h\" (UID: \"6d08f91a-f116-4037-bee3-3fae03414811\") " pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" May 10 00:01:22.725834 kubelet[3394]: I0510 00:01:22.725646 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgmz\" (UniqueName: \"kubernetes.io/projected/872abefe-e998-4710-ae06-61521f1057d1-kube-api-access-2vgmz\") pod \"calico-apiserver-8444c9d58d-t6jjx\" (UID: \"872abefe-e998-4710-ae06-61521f1057d1\") " pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" May 10 00:01:22.725834 kubelet[3394]: I0510 00:01:22.725743 3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dhcj\" (UniqueName: \"kubernetes.io/projected/6d08f91a-f116-4037-bee3-3fae03414811-kube-api-access-4dhcj\") pod \"calico-apiserver-8444c9d58d-q5m7h\" (UID: \"6d08f91a-f116-4037-bee3-3fae03414811\") " pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" May 10 00:01:22.981640 containerd[1935]: time="2025-05-10T00:01:22.981571957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56879d8f96-w2m6s,Uid:ec53ba6d-6936-42e6-ac4d-b9640fa50bea,Namespace:calico-system,Attempt:0,}" May 10 00:01:23.431689 containerd[1935]: time="2025-05-10T00:01:23.431181383Z" level=info msg="shim disconnected" id=64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28 namespace=k8s.io May 10 00:01:23.431689 containerd[1935]: time="2025-05-10T00:01:23.431301107Z" level=warning msg="cleaning up after shim disconnected" id=64eda5b036775217cb70a633ef5a0b1d092c5c962d68403ba6ac6a5b37e87a28 namespace=k8s.io May 10 00:01:23.431689 containerd[1935]: time="2025-05-10T00:01:23.431323787Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:01:23.541088 containerd[1935]: time="2025-05-10T00:01:23.540946692Z" level=error msg="Failed to destroy network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:23.544296 containerd[1935]: time="2025-05-10T00:01:23.544220484Z" level=error msg="encountered an error cleaning up failed sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:23.544532 containerd[1935]: time="2025-05-10T00:01:23.544321332Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-56879d8f96-w2m6s,Uid:ec53ba6d-6936-42e6-ac4d-b9640fa50bea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:23.544714 kubelet[3394]: E0510 00:01:23.544604 3394 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:23.544714 kubelet[3394]: E0510 00:01:23.544693 3394 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" May 10 00:01:23.545306 kubelet[3394]: E0510 00:01:23.544726 3394 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" May 10 00:01:23.545306 kubelet[3394]: E0510 00:01:23.544813 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56879d8f96-w2m6s_calico-system(ec53ba6d-6936-42e6-ac4d-b9640fa50bea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56879d8f96-w2m6s_calico-system(ec53ba6d-6936-42e6-ac4d-b9640fa50bea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" podUID="ec53ba6d-6936-42e6-ac4d-b9640fa50bea" May 10 00:01:23.547996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933-shm.mount: Deactivated successfully. 
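Every RunPodSandbox failure in this stretch reduces to one root cause: the calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it is running, and refuses to do any add or delete while it is absent. A sketch of that gate, with the path and error wording taken from the log text (the function itself is illustrative, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// readNodename mimics the check the calico CNI plugin performs before any
// sandbox add/delete: if calico/node has not yet written its nodename file,
// fail with a hint that the node container is not running.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```

Once the calico-node container (still pulling at this point) starts and writes the file, the same sandboxes are retried and succeed.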
May 10 00:01:23.727720 kubelet[3394]: E0510 00:01:23.727660 3394 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 10 00:01:23.728045 kubelet[3394]: E0510 00:01:23.727660 3394 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 10 00:01:23.728045 kubelet[3394]: E0510 00:01:23.727828 3394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4bfdbcc0-fe8b-4871-a536-daaca6c5cc23-config-volume podName:4bfdbcc0-fe8b-4871-a536-daaca6c5cc23 nodeName:}" failed. No retries permitted until 2025-05-10 00:01:24.227764553 +0000 UTC m=+35.298517142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4bfdbcc0-fe8b-4871-a536-daaca6c5cc23-config-volume") pod "coredns-7db6d8ff4d-nnp79" (UID: "4bfdbcc0-fe8b-4871-a536-daaca6c5cc23") : failed to sync configmap cache: timed out waiting for the condition May 10 00:01:23.728289 kubelet[3394]: E0510 00:01:23.727870 3394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/462d92e2-f6b9-4559-82e4-3c84f91c0749-config-volume podName:462d92e2-f6b9-4559-82e4-3c84f91c0749 nodeName:}" failed. No retries permitted until 2025-05-10 00:01:24.227852741 +0000 UTC m=+35.298605342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/462d92e2-f6b9-4559-82e4-3c84f91c0749-config-volume") pod "coredns-7db6d8ff4d-7496j" (UID: "462d92e2-f6b9-4559-82e4-3c84f91c0749") : failed to sync configmap cache: timed out waiting for the condition May 10 00:01:23.886847 containerd[1935]: time="2025-05-10T00:01:23.886767937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-t6jjx,Uid:872abefe-e998-4710-ae06-61521f1057d1,Namespace:calico-apiserver,Attempt:0,}" May 10 00:01:23.904090 containerd[1935]: time="2025-05-10T00:01:23.904025473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-q5m7h,Uid:6d08f91a-f116-4037-bee3-3fae03414811,Namespace:calico-apiserver,Attempt:0,}" May 10 00:01:24.024208 containerd[1935]: time="2025-05-10T00:01:24.023879722Z" level=error msg="Failed to destroy network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.025702 containerd[1935]: time="2025-05-10T00:01:24.024519274Z" level=error msg="encountered an error cleaning up failed sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.025702 containerd[1935]: time="2025-05-10T00:01:24.024608458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-t6jjx,Uid:872abefe-e998-4710-ae06-61521f1057d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
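The MountVolume.SetUp failures above are not fatal: kubelet parks each failed operation in its pending-operations map and retries with exponential backoff, which is why the records read "No retries permitted until ... (durationBeforeRetry 500ms)". A sketch of that schedule, assuming the delay starts at 500ms and doubles per consecutive failure up to a cap (the 2m2s cap below matches kubelet's exponentialbackoff package as I recall it; treat it as an assumption):

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical reproduction of kubelet's volume-operation backoff:
// start at 500ms, double on every consecutive failure, cap the delay.
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second // assumed cap
)

func main() {
	delay := initialDurationBeforeRetry
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDurationBeforeRetry {
			delay = maxDurationBeforeRetry
		}
	}
}
```

In this log the first retry lands 500ms later, after the coredns ConfigMap cache has synced, so the backoff never has to grow.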
check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.027404 kubelet[3394]: E0510 00:01:24.025009 3394 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.027404 kubelet[3394]: E0510 00:01:24.025110 3394 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" May 10 00:01:24.027404 kubelet[3394]: E0510 00:01:24.025167 3394 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" May 10 00:01:24.027639 kubelet[3394]: E0510 00:01:24.025252 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8444c9d58d-t6jjx_calico-apiserver(872abefe-e998-4710-ae06-61521f1057d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8444c9d58d-t6jjx_calico-apiserver(872abefe-e998-4710-ae06-61521f1057d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" podUID="872abefe-e998-4710-ae06-61521f1057d1" May 10 00:01:24.047784 containerd[1935]: time="2025-05-10T00:01:24.047642662Z" level=error msg="Failed to destroy network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.048336 containerd[1935]: time="2025-05-10T00:01:24.048282862Z" level=error msg="encountered an error cleaning up failed sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.048473 containerd[1935]: time="2025-05-10T00:01:24.048367930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-q5m7h,Uid:6d08f91a-f116-4037-bee3-3fae03414811,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.049084 kubelet[3394]: E0510 00:01:24.048812 3394 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.049084 kubelet[3394]: E0510 00:01:24.048900 3394 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" May 10 00:01:24.049084 kubelet[3394]: E0510 00:01:24.048932 3394 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" May 10 00:01:24.049334 kubelet[3394]: E0510 00:01:24.049003 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8444c9d58d-q5m7h_calico-apiserver(6d08f91a-f116-4037-bee3-3fae03414811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8444c9d58d-q5m7h_calico-apiserver(6d08f91a-f116-4037-bee3-3fae03414811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" podUID="6d08f91a-f116-4037-bee3-3fae03414811" May 10 00:01:24.236124 systemd[1]: Created slice kubepods-besteffort-pod99d7a31f_c874_41d9_9abb_62b4e619c443.slice - libcontainer container kubepods-besteffort-pod99d7a31f_c874_41d9_9abb_62b4e619c443.slice. 
May 10 00:01:24.242067 containerd[1935]: time="2025-05-10T00:01:24.241996283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5xkj,Uid:99d7a31f-c874-41d9-9abb-62b4e619c443,Namespace:calico-system,Attempt:0,}" May 10 00:01:24.350597 containerd[1935]: time="2025-05-10T00:01:24.350177964Z" level=error msg="Failed to destroy network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.351817 containerd[1935]: time="2025-05-10T00:01:24.351529272Z" level=error msg="encountered an error cleaning up failed sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.351817 containerd[1935]: time="2025-05-10T00:01:24.351721332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5xkj,Uid:99d7a31f-c874-41d9-9abb-62b4e619c443,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.352135 kubelet[3394]: E0510 00:01:24.352083 3394 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.352214 kubelet[3394]: E0510 00:01:24.352163 3394 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g5xkj" May 10 00:01:24.352214 kubelet[3394]: E0510 00:01:24.352196 3394 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g5xkj" May 10 00:01:24.352439 kubelet[3394]: E0510 00:01:24.352263 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g5xkj_calico-system(99d7a31f-c874-41d9-9abb-62b4e619c443)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g5xkj_calico-system(99d7a31f-c874-41d9-9abb-62b4e619c443)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:24.404176 containerd[1935]: time="2025-05-10T00:01:24.403971072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7496j,Uid:462d92e2-f6b9-4559-82e4-3c84f91c0749,Namespace:kube-system,Attempt:0,}" May 10 00:01:24.443188 containerd[1935]: time="2025-05-10T00:01:24.442654404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnp79,Uid:4bfdbcc0-fe8b-4871-a536-daaca6c5cc23,Namespace:kube-system,Attempt:0,}" May 10 00:01:24.455821 kubelet[3394]: I0510 00:01:24.455090 3394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:24.458288 containerd[1935]: time="2025-05-10T00:01:24.456870060Z" level=info msg="StopPodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\"" May 10 00:01:24.464828 containerd[1935]: time="2025-05-10T00:01:24.464732544Z" level=info msg="Ensure that sandbox fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab in task-service has been cleanup successfully" May 10 00:01:24.478389 kubelet[3394]: I0510 00:01:24.477692 3394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:24.486046 containerd[1935]: time="2025-05-10T00:01:24.484995312Z" level=info msg="StopPodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\"" May 10 00:01:24.486046 containerd[1935]: time="2025-05-10T00:01:24.485297376Z" level=info msg="Ensure that sandbox 8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b in task-service has been cleanup successfully" May 10 00:01:24.489886 kubelet[3394]: I0510 00:01:24.489548 3394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:24.493666 containerd[1935]: time="2025-05-10T00:01:24.490684188Z" level=info msg="StopPodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\"" May 10 00:01:24.493666 containerd[1935]: time="2025-05-10T00:01:24.491003232Z" level=info msg="Ensure that sandbox 4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c in task-service has been cleanup successfully" May 10 00:01:24.539460 containerd[1935]: time="2025-05-10T00:01:24.537084037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 00:01:24.559476 kubelet[3394]: I0510 00:01:24.559425 3394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" May 10 00:01:24.564563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c-shm.mount: Deactivated successfully. 
May 10 00:01:24.575605 containerd[1935]: time="2025-05-10T00:01:24.575539753Z" level=info msg="StopPodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\"" May 10 00:01:24.578631 containerd[1935]: time="2025-05-10T00:01:24.577842157Z" level=info msg="Ensure that sandbox 88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933 in task-service has been cleanup successfully" May 10 00:01:24.654829 containerd[1935]: time="2025-05-10T00:01:24.654634453Z" level=error msg="StopPodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" failed" error="failed to destroy network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.655344 kubelet[3394]: E0510 00:01:24.655277 3394 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:24.655470 kubelet[3394]: E0510 00:01:24.655360 3394 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab"} May 10 00:01:24.655470 kubelet[3394]: E0510 00:01:24.655447 3394 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99d7a31f-c874-41d9-9abb-62b4e619c443\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:24.655861 kubelet[3394]: E0510 00:01:24.655486 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99d7a31f-c874-41d9-9abb-62b4e619c443\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g5xkj" podUID="99d7a31f-c874-41d9-9abb-62b4e619c443" May 10 00:01:24.741146 containerd[1935]: time="2025-05-10T00:01:24.740841662Z" level=error msg="StopPodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" failed" error="failed to destroy network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.742048 containerd[1935]: time="2025-05-10T00:01:24.741457538Z" level=error msg="StopPodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" failed" 
error="failed to destroy network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.742234 kubelet[3394]: E0510 00:01:24.741673 3394 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" May 10 00:01:24.742234 kubelet[3394]: E0510 00:01:24.741734 3394 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"} May 10 00:01:24.742234 kubelet[3394]: E0510 00:01:24.741811 3394 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec53ba6d-6936-42e6-ac4d-b9640fa50bea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:24.742234 kubelet[3394]: E0510 00:01:24.741866 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec53ba6d-6936-42e6-ac4d-b9640fa50bea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" podUID="ec53ba6d-6936-42e6-ac4d-b9640fa50bea" May 10 00:01:24.742587 kubelet[3394]: E0510 00:01:24.741675 3394 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:24.742587 kubelet[3394]: E0510 00:01:24.741924 3394 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b"} May 10 00:01:24.742587 kubelet[3394]: E0510 00:01:24.741968 3394 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d08f91a-f116-4037-bee3-3fae03414811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" May 10 00:01:24.742587 kubelet[3394]: E0510 00:01:24.742001 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d08f91a-f116-4037-bee3-3fae03414811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" podUID="6d08f91a-f116-4037-bee3-3fae03414811" May 10 00:01:24.743289 containerd[1935]: time="2025-05-10T00:01:24.743087690Z" level=error msg="StopPodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" failed" error="failed to destroy network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.744094 kubelet[3394]: E0510 00:01:24.743673 3394 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:24.744094 kubelet[3394]: E0510 00:01:24.743753 3394 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c"} May 10 00:01:24.744094 kubelet[3394]: E0510 00:01:24.743930 3394 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"872abefe-e998-4710-ae06-61521f1057d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:24.744094 kubelet[3394]: E0510 00:01:24.743988 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"872abefe-e998-4710-ae06-61521f1057d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" podUID="872abefe-e998-4710-ae06-61521f1057d1" May 10 00:01:24.745367 containerd[1935]: time="2025-05-10T00:01:24.745245170Z" level=error msg="Failed to destroy network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 10 00:01:24.746102 containerd[1935]: time="2025-05-10T00:01:24.746040758Z" level=error msg="encountered an error cleaning up failed sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.746234 containerd[1935]: time="2025-05-10T00:01:24.746131622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7496j,Uid:462d92e2-f6b9-4559-82e4-3c84f91c0749,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.748893 kubelet[3394]: E0510 00:01:24.746597 3394 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.748893 kubelet[3394]: E0510 00:01:24.746668 3394 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7496j" May 10 00:01:24.748893 kubelet[3394]: E0510 00:01:24.746701 3394 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7496j" May 10 00:01:24.749118 kubelet[3394]: E0510 00:01:24.746767 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7496j_kube-system(462d92e2-f6b9-4559-82e4-3c84f91c0749)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7496j_kube-system(462d92e2-f6b9-4559-82e4-3c84f91c0749)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7496j" podUID="462d92e2-f6b9-4559-82e4-3c84f91c0749" May 10 00:01:24.752193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b-shm.mount: Deactivated successfully. 
May 10 00:01:24.775691 containerd[1935]: time="2025-05-10T00:01:24.775627850Z" level=error msg="Failed to destroy network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.776638 containerd[1935]: time="2025-05-10T00:01:24.776400710Z" level=error msg="encountered an error cleaning up failed sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.776638 containerd[1935]: time="2025-05-10T00:01:24.776491730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnp79,Uid:4bfdbcc0-fe8b-4871-a536-daaca6c5cc23,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.778971 kubelet[3394]: E0510 00:01:24.776867 3394 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:24.778971 kubelet[3394]: E0510 00:01:24.776945 3394 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nnp79" May 10 00:01:24.778971 kubelet[3394]: E0510 00:01:24.776981 3394 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nnp79" May 10 00:01:24.779255 kubelet[3394]: E0510 00:01:24.777040 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nnp79_kube-system(4bfdbcc0-fe8b-4871-a536-daaca6c5cc23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nnp79_kube-system(4bfdbcc0-fe8b-4871-a536-daaca6c5cc23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-nnp79" podUID="4bfdbcc0-fe8b-4871-a536-daaca6c5cc23" May 10 00:01:24.784825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945-shm.mount: Deactivated successfully. May 10 00:01:24.789764 kubelet[3394]: I0510 00:01:24.789446 3394 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:25.563682 kubelet[3394]: I0510 00:01:25.563379 3394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" May 10 00:01:25.565323 containerd[1935]: time="2025-05-10T00:01:25.564531386Z" level=info msg="StopPodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\"" May 10 00:01:25.565323 containerd[1935]: time="2025-05-10T00:01:25.564903254Z" level=info msg="Ensure that sandbox 873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945 in task-service has been cleanup successfully" May 10 00:01:25.570880 kubelet[3394]: I0510 00:01:25.569715 3394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:25.572737 containerd[1935]: time="2025-05-10T00:01:25.572059046Z" level=info msg="StopPodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\"" May 10 00:01:25.572737 containerd[1935]: time="2025-05-10T00:01:25.572339354Z" level=info msg="Ensure that sandbox 9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b in task-service has been cleanup successfully" May 10 00:01:25.637211 containerd[1935]: time="2025-05-10T00:01:25.637141682Z" level=error msg="StopPodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" failed" error="failed to destroy network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:25.637922 kubelet[3394]: E0510 00:01:25.637660 3394 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" May 10 00:01:25.637922 kubelet[3394]: E0510 00:01:25.637733 3394 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"} May 10 00:01:25.637922 kubelet[3394]: E0510 00:01:25.637830 3394 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:25.637922 kubelet[3394]: E0510 00:01:25.637873 3394 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nnp79" podUID="4bfdbcc0-fe8b-4871-a536-daaca6c5cc23" May 10 00:01:25.651769 containerd[1935]: time="2025-05-10T00:01:25.651653594Z" level=error msg="StopPodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" failed" error="failed to destroy network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:25.652207 kubelet[3394]: E0510 00:01:25.652085 3394 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:25.652415 kubelet[3394]: E0510 00:01:25.652381 3394 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b"} May 10 00:01:25.652553 kubelet[3394]: E0510 00:01:25.652523 3394 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"462d92e2-f6b9-4559-82e4-3c84f91c0749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:25.652746 kubelet[3394]: E0510 00:01:25.652711 3394 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"462d92e2-f6b9-4559-82e4-3c84f91c0749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7496j" podUID="462d92e2-f6b9-4559-82e4-3c84f91c0749" May 10 00:01:27.556721 systemd[1]: Started sshd@9-172.31.18.167:22-147.75.109.163:34084.service - OpenSSH per-connection server daemon (147.75.109.163:34084). May 10 00:01:27.747008 sshd[4495]: Accepted publickey for core from 147.75.109.163 port 34084 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:27.749940 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:27.762694 systemd-logind[1910]: New session 10 of user core. 
May 10 00:01:27.767459 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:01:28.077952 sshd[4495]: pam_unix(sshd:session): session closed for user core May 10 00:01:28.085470 systemd-logind[1910]: Session 10 logged out. Waiting for processes to exit. May 10 00:01:28.085907 systemd[1]: sshd@9-172.31.18.167:22-147.75.109.163:34084.service: Deactivated successfully. May 10 00:01:28.092253 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:01:28.100853 systemd-logind[1910]: Removed session 10. May 10 00:01:31.142082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253039291.mount: Deactivated successfully. May 10 00:01:31.212685 containerd[1935]: time="2025-05-10T00:01:31.212603994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:31.214241 containerd[1935]: time="2025-05-10T00:01:31.214154526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:01:31.215391 containerd[1935]: time="2025-05-10T00:01:31.215232138Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:31.220376 containerd[1935]: time="2025-05-10T00:01:31.220262262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:31.222111 containerd[1935]: time="2025-05-10T00:01:31.221514042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.682094577s" May 10 00:01:31.222111 containerd[1935]: time="2025-05-10T00:01:31.221578218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:01:31.261117 containerd[1935]: time="2025-05-10T00:01:31.261028914Z" level=info msg="CreateContainer within sandbox \"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:01:31.297023 containerd[1935]: time="2025-05-10T00:01:31.296948190Z" level=info msg="CreateContainer within sandbox \"6ccaf520d0ba6d2e461a427415c035d3486cdae01b50b7eb4e1539c5d22b2499\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dc97d5a04b88e8dbe529cd2c9ffeefc76d3f96e3e9f87e89702f6aa903fa7a5c\"" May 10 00:01:31.298553 containerd[1935]: time="2025-05-10T00:01:31.297956850Z" level=info msg="StartContainer for \"dc97d5a04b88e8dbe529cd2c9ffeefc76d3f96e3e9f87e89702f6aa903fa7a5c\"" May 10 00:01:31.344138 systemd[1]: Started cri-containerd-dc97d5a04b88e8dbe529cd2c9ffeefc76d3f96e3e9f87e89702f6aa903fa7a5c.scope - libcontainer container dc97d5a04b88e8dbe529cd2c9ffeefc76d3f96e3e9f87e89702f6aa903fa7a5c. 
May 10 00:01:31.400171 containerd[1935]: time="2025-05-10T00:01:31.400012195Z" level=info msg="StartContainer for \"dc97d5a04b88e8dbe529cd2c9ffeefc76d3f96e3e9f87e89702f6aa903fa7a5c\" returns successfully" May 10 00:01:31.540368 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:01:31.540559 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 10 00:01:31.636190 kubelet[3394]: I0510 00:01:31.636078 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f66gn" podStartSLOduration=1.269040757 podStartE2EDuration="19.633943904s" podCreationTimestamp="2025-05-10 00:01:12 +0000 UTC" firstStartedPulling="2025-05-10 00:01:12.858233691 +0000 UTC m=+23.928986304" lastFinishedPulling="2025-05-10 00:01:31.22313685 +0000 UTC m=+42.293889451" observedRunningTime="2025-05-10 00:01:31.63074576 +0000 UTC m=+42.701498385" watchObservedRunningTime="2025-05-10 00:01:31.633943904 +0000 UTC m=+42.704696517" May 10 00:01:33.131337 systemd[1]: Started sshd@10-172.31.18.167:22-147.75.109.163:34098.service - OpenSSH per-connection server daemon (147.75.109.163:34098). May 10 00:01:33.374503 sshd[4628]: Accepted publickey for core from 147.75.109.163 port 34098 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:33.385524 sshd[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:33.398204 systemd-logind[1910]: New session 11 of user core. May 10 00:01:33.419121 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 00:01:33.785866 kernel: bpftool[4701]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 00:01:33.791132 sshd[4628]: pam_unix(sshd:session): session closed for user core May 10 00:01:33.801520 systemd-logind[1910]: Session 11 logged out. Waiting for processes to exit. May 10 00:01:33.801608 systemd[1]: sshd@10-172.31.18.167:22-147.75.109.163:34098.service: Deactivated successfully. May 10 00:01:33.812300 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:01:33.818716 systemd-logind[1910]: Removed session 11. May 10 00:01:34.175336 systemd-networkd[1846]: vxlan.calico: Link UP May 10 00:01:34.175352 systemd-networkd[1846]: vxlan.calico: Gained carrier May 10 00:01:34.176452 (udev-worker)[4558]: Network interface NamePolicy= disabled on kernel command line. May 10 00:01:34.225681 (udev-worker)[4555]: Network interface NamePolicy= disabled on kernel command line. May 10 00:01:34.483325 kubelet[3394]: I0510 00:01:34.482472 3394 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:35.654117 systemd-networkd[1846]: vxlan.calico: Gained IPv6LL May 10 00:01:36.226808 containerd[1935]: time="2025-05-10T00:01:36.226062347Z" level=info msg="StopPodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\"" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.367 [INFO][4850] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.368 [INFO][4850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" iface="eth0" netns="/var/run/netns/cni-66686e24-2786-450b-edf3-84cbacb6ad4a" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.369 [INFO][4850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" iface="eth0" netns="/var/run/netns/cni-66686e24-2786-450b-edf3-84cbacb6ad4a" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.372 [INFO][4850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" iface="eth0" netns="/var/run/netns/cni-66686e24-2786-450b-edf3-84cbacb6ad4a" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.372 [INFO][4850] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.372 [INFO][4850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.420 [INFO][4857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.420 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.420 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.435 [WARNING][4857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.435 [INFO][4857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.438 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:36.446847 containerd[1935]: 2025-05-10 00:01:36.443 [INFO][4850] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:36.448994 containerd[1935]: time="2025-05-10T00:01:36.448867464Z" level=info msg="TearDown network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" successfully" May 10 00:01:36.448994 containerd[1935]: time="2025-05-10T00:01:36.448949184Z" level=info msg="StopPodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" returns successfully" May 10 00:01:36.452305 containerd[1935]: time="2025-05-10T00:01:36.452242380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-q5m7h,Uid:6d08f91a-f116-4037-bee3-3fae03414811,Namespace:calico-apiserver,Attempt:1,}" May 10 00:01:36.453531 systemd[1]: run-netns-cni\x2d66686e24\x2d2786\x2d450b\x2dedf3\x2d84cbacb6ad4a.mount: Deactivated successfully. May 10 00:01:36.687491 systemd-networkd[1846]: caliefb4d0bd527: Link UP May 10 00:01:36.689320 systemd-networkd[1846]: caliefb4d0bd527: Gained carrier May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.545 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0 calico-apiserver-8444c9d58d- calico-apiserver 6d08f91a-f116-4037-bee3-3fae03414811 838 0 2025-05-10 00:01:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8444c9d58d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-167 calico-apiserver-8444c9d58d-q5m7h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliefb4d0bd527 [] []}} ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.545 [INFO][4864] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.602 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" HandleID="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.623 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" HandleID="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cc60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-167", "pod":"calico-apiserver-8444c9d58d-q5m7h", "timestamp":"2025-05-10 00:01:36.602883877 +0000 UTC"}, Hostname:"ip-172-31-18-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.623 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.623 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.623 [INFO][4876] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-167' May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.626 [INFO][4876] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.632 [INFO][4876] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.639 [INFO][4876] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.643 [INFO][4876] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.646 [INFO][4876] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.646 [INFO][4876] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.648 [INFO][4876] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.668 [INFO][4876] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.677 [INFO][4876] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.129/26] block=192.168.75.128/26 handle="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.677 [INFO][4876] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.129/26] handle="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" host="ip-172-31-18-167" May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.677 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:36.724001 containerd[1935]: 2025-05-10 00:01:36.677 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.129/26] IPv6=[] ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" HandleID="k8s-pod-network.7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.726373 containerd[1935]: 2025-05-10 00:01:36.681 [INFO][4864] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d08f91a-f116-4037-bee3-3fae03414811", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"", Pod:"calico-apiserver-8444c9d58d-q5m7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefb4d0bd527", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:36.726373 containerd[1935]: 2025-05-10 00:01:36.681 [INFO][4864] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.129/32] ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.726373 containerd[1935]: 2025-05-10 00:01:36.681 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefb4d0bd527 ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.726373 containerd[1935]: 2025-05-10 00:01:36.693 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.726373 containerd[1935]: 2025-05-10 00:01:36.694 [INFO][4864] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d08f91a-f116-4037-bee3-3fae03414811", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b", Pod:"calico-apiserver-8444c9d58d-q5m7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefb4d0bd527", MAC:"7a:89:22:d4:9a:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:36.726373 containerd[1935]: 2025-05-10 00:01:36.714 [INFO][4864] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-q5m7h" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:36.769565 containerd[1935]: time="2025-05-10T00:01:36.769232533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:36.769565 containerd[1935]: time="2025-05-10T00:01:36.769401409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:36.769565 containerd[1935]: time="2025-05-10T00:01:36.769442725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:36.770402 containerd[1935]: time="2025-05-10T00:01:36.769619965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:36.824160 systemd[1]: Started cri-containerd-7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b.scope - libcontainer container 7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b. 
May 10 00:01:36.891760 containerd[1935]: time="2025-05-10T00:01:36.891659978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-q5m7h,Uid:6d08f91a-f116-4037-bee3-3fae03414811,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b\"" May 10 00:01:36.896016 containerd[1935]: time="2025-05-10T00:01:36.895954214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:01:37.227525 containerd[1935]: time="2025-05-10T00:01:37.227401392Z" level=info msg="StopPodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\"" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.320 [INFO][4953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.320 [INFO][4953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" iface="eth0" netns="/var/run/netns/cni-44c911c0-45dd-89cd-5d79-92eebf384905" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.324 [INFO][4953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" iface="eth0" netns="/var/run/netns/cni-44c911c0-45dd-89cd-5d79-92eebf384905" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.326 [INFO][4953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" iface="eth0" netns="/var/run/netns/cni-44c911c0-45dd-89cd-5d79-92eebf384905" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.326 [INFO][4953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.326 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.372 [INFO][4960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.373 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.373 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.385 [WARNING][4960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.386 [INFO][4960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.388 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:37.394423 containerd[1935]: 2025-05-10 00:01:37.390 [INFO][4953] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:37.395651 containerd[1935]: time="2025-05-10T00:01:37.395590933Z" level=info msg="TearDown network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" successfully" May 10 00:01:37.395651 containerd[1935]: time="2025-05-10T00:01:37.395642773Z" level=info msg="StopPodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" returns successfully" May 10 00:01:37.397441 containerd[1935]: time="2025-05-10T00:01:37.396869665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7496j,Uid:462d92e2-f6b9-4559-82e4-3c84f91c0749,Namespace:kube-system,Attempt:1,}" May 10 00:01:37.458351 systemd[1]: run-netns-cni\x2d44c911c0\x2d45dd\x2d89cd\x2d5d79\x2d92eebf384905.mount: Deactivated successfully. May 10 00:01:37.611012 systemd-networkd[1846]: calibbcf15b207f: Link UP May 10 00:01:37.611412 systemd-networkd[1846]: calibbcf15b207f: Gained carrier May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.477 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0 coredns-7db6d8ff4d- kube-system 462d92e2-f6b9-4559-82e4-3c84f91c0749 853 0 2025-05-10 00:01:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-167 coredns-7db6d8ff4d-7496j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbcf15b207f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.477 [INFO][4966] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.530 [INFO][4979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" HandleID="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" 
Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.548 [INFO][4979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" HandleID="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d480), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-167", "pod":"coredns-7db6d8ff4d-7496j", "timestamp":"2025-05-10 00:01:37.529988173 +0000 UTC"}, Hostname:"ip-172-31-18-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.548 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.549 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.549 [INFO][4979] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-167' May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.552 [INFO][4979] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.560 [INFO][4979] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.568 [INFO][4979] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.571 [INFO][4979] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.575 [INFO][4979] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.575 [INFO][4979] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.577 [INFO][4979] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.584 [INFO][4979] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.595 [INFO][4979] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.130/26] block=192.168.75.128/26 handle="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.595 [INFO][4979] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.130/26] handle="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" 
host="ip-172-31-18-167" May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.596 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:37.651890 containerd[1935]: 2025-05-10 00:01:37.596 [INFO][4979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.130/26] IPv6=[] ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" HandleID="k8s-pod-network.518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.653765 containerd[1935]: 2025-05-10 00:01:37.600 [INFO][4966] cni-plugin/k8s.go 386: Populated endpoint ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"462d92e2-f6b9-4559-82e4-3c84f91c0749", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"", Pod:"coredns-7db6d8ff4d-7496j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbcf15b207f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:37.653765 containerd[1935]: 2025-05-10 00:01:37.601 [INFO][4966] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.130/32] ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.653765 containerd[1935]: 2025-05-10 00:01:37.601 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbcf15b207f ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.653765 containerd[1935]: 2025-05-10 00:01:37.614 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.653765 containerd[1935]: 2025-05-10 00:01:37.615 [INFO][4966] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"462d92e2-f6b9-4559-82e4-3c84f91c0749", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c", Pod:"coredns-7db6d8ff4d-7496j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbcf15b207f", MAC:"66:40:f2:b7:a3:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:37.653765 containerd[1935]: 2025-05-10 00:01:37.647 [INFO][4966] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7496j" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:37.702674 containerd[1935]: time="2025-05-10T00:01:37.702156710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:37.702674 containerd[1935]: time="2025-05-10T00:01:37.702245282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:37.702674 containerd[1935]: time="2025-05-10T00:01:37.702270794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:37.702674 containerd[1935]: time="2025-05-10T00:01:37.702445274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:37.744641 systemd[1]: run-containerd-runc-k8s.io-518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c-runc.zHgLtp.mount: Deactivated successfully. May 10 00:01:37.755111 systemd[1]: Started cri-containerd-518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c.scope - libcontainer container 518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c. May 10 00:01:37.839395 containerd[1935]: time="2025-05-10T00:01:37.839215179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7496j,Uid:462d92e2-f6b9-4559-82e4-3c84f91c0749,Namespace:kube-system,Attempt:1,} returns sandbox id \"518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c\"" May 10 00:01:37.846971 containerd[1935]: time="2025-05-10T00:01:37.846555315Z" level=info msg="CreateContainer within sandbox \"518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:01:37.876124 containerd[1935]: time="2025-05-10T00:01:37.875992167Z" level=info msg="CreateContainer within sandbox \"518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a06728b824927c7e12fff6982df2c7d2eb770bb506a3a2da9893a90e1ed8e2f5\"" May 10 00:01:37.877629 containerd[1935]: time="2025-05-10T00:01:37.877476879Z" level=info msg="StartContainer for \"a06728b824927c7e12fff6982df2c7d2eb770bb506a3a2da9893a90e1ed8e2f5\"" May 10 00:01:37.925100 systemd[1]: Started cri-containerd-a06728b824927c7e12fff6982df2c7d2eb770bb506a3a2da9893a90e1ed8e2f5.scope - libcontainer container a06728b824927c7e12fff6982df2c7d2eb770bb506a3a2da9893a90e1ed8e2f5. May 10 00:01:37.982495 containerd[1935]: time="2025-05-10T00:01:37.982404399Z" level=info msg="StartContainer for \"a06728b824927c7e12fff6982df2c7d2eb770bb506a3a2da9893a90e1ed8e2f5\" returns successfully" May 10 00:01:38.226627 containerd[1935]: time="2025-05-10T00:01:38.226501777Z" level=info msg="StopPodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\"" May 10 00:01:38.343923 systemd-networkd[1846]: caliefb4d0bd527: Gained IPv6LL May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.339 [INFO][5090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.340 [INFO][5090] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" iface="eth0" netns="/var/run/netns/cni-61da0c78-35d6-bc75-e1ac-bf22b038a8c6" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.340 [INFO][5090] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" iface="eth0" netns="/var/run/netns/cni-61da0c78-35d6-bc75-e1ac-bf22b038a8c6" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.341 [INFO][5090] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" iface="eth0" netns="/var/run/netns/cni-61da0c78-35d6-bc75-e1ac-bf22b038a8c6" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.341 [INFO][5090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.341 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.382 [INFO][5097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.383 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.383 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.400 [WARNING][5097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.401 [INFO][5097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.408 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:38.417111 containerd[1935]: 2025-05-10 00:01:38.412 [INFO][5090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:38.417111 containerd[1935]: time="2025-05-10T00:01:38.416427386Z" level=info msg="TearDown network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" successfully" May 10 00:01:38.417111 containerd[1935]: time="2025-05-10T00:01:38.416464862Z" level=info msg="StopPodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" returns successfully" May 10 00:01:38.419939 containerd[1935]: time="2025-05-10T00:01:38.418330682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-t6jjx,Uid:872abefe-e998-4710-ae06-61521f1057d1,Namespace:calico-apiserver,Attempt:1,}" May 10 00:01:38.464341 systemd[1]: run-netns-cni\x2d61da0c78\x2d35d6\x2dbc75\x2de1ac\x2dbf22b038a8c6.mount: Deactivated successfully. 
May 10 00:01:38.712418 kubelet[3394]: I0510 00:01:38.710915 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7496j" podStartSLOduration=36.710892447 podStartE2EDuration="36.710892447s" podCreationTimestamp="2025-05-10 00:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:38.683405295 +0000 UTC m=+49.754158148" watchObservedRunningTime="2025-05-10 00:01:38.710892447 +0000 UTC m=+49.781645060" May 10 00:01:38.728019 systemd-networkd[1846]: calibbcf15b207f: Gained IPv6LL May 10 00:01:38.734037 systemd-networkd[1846]: cali66bc3ef4261: Link UP May 10 00:01:38.735958 systemd-networkd[1846]: cali66bc3ef4261: Gained carrier May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.551 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0 calico-apiserver-8444c9d58d- calico-apiserver 872abefe-e998-4710-ae06-61521f1057d1 867 0 2025-05-10 00:01:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8444c9d58d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-167 calico-apiserver-8444c9d58d-t6jjx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali66bc3ef4261 [] []}} ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.552 [INFO][5104] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.615 [INFO][5117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" HandleID="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.634 [INFO][5117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" HandleID="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-167", "pod":"calico-apiserver-8444c9d58d-t6jjx", "timestamp":"2025-05-10 00:01:38.615043671 +0000 UTC"}, Hostname:"ip-172-31-18-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.635 [INFO][5117] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.635 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.635 [INFO][5117] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-167' May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.638 [INFO][5117] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.647 [INFO][5117] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.660 [INFO][5117] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.664 [INFO][5117] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.669 [INFO][5117] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.669 [INFO][5117] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.674 [INFO][5117] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.690 [INFO][5117] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.707 [INFO][5117] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.131/26] block=192.168.75.128/26 handle="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.708 [INFO][5117] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.131/26] handle="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" host="ip-172-31-18-167" May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.709 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:38.790263 containerd[1935]: 2025-05-10 00:01:38.709 [INFO][5117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.131/26] IPv6=[] ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" HandleID="k8s-pod-network.89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.793973 containerd[1935]: 2025-05-10 00:01:38.719 [INFO][5104] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"872abefe-e998-4710-ae06-61521f1057d1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"", Pod:"calico-apiserver-8444c9d58d-t6jjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66bc3ef4261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:38.793973 containerd[1935]: 2025-05-10 00:01:38.719 [INFO][5104] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.131/32] ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.793973 containerd[1935]: 2025-05-10 00:01:38.719 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66bc3ef4261 ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.793973 containerd[1935]: 2025-05-10 00:01:38.737 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.793973 containerd[1935]: 2025-05-10 00:01:38.738 [INFO][5104] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"872abefe-e998-4710-ae06-61521f1057d1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a", Pod:"calico-apiserver-8444c9d58d-t6jjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66bc3ef4261", MAC:"3a:0a:27:48:5b:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:38.793973 containerd[1935]: 2025-05-10 00:01:38.784 [INFO][5104] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a" Namespace="calico-apiserver" Pod="calico-apiserver-8444c9d58d-t6jjx" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:38.841431 systemd[1]: Started sshd@11-172.31.18.167:22-147.75.109.163:52908.service - OpenSSH per-connection server daemon (147.75.109.163:52908). May 10 00:01:38.856886 containerd[1935]: time="2025-05-10T00:01:38.856034812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:38.856886 containerd[1935]: time="2025-05-10T00:01:38.856127500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:38.856886 containerd[1935]: time="2025-05-10T00:01:38.856163776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:38.856886 containerd[1935]: time="2025-05-10T00:01:38.856338592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:38.929908 systemd[1]: run-containerd-runc-k8s.io-89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a-runc.421ffa.mount: Deactivated successfully. May 10 00:01:38.940119 systemd[1]: Started cri-containerd-89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a.scope - libcontainer container 89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a. 
May 10 00:01:39.016296 containerd[1935]: time="2025-05-10T00:01:39.016230277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8444c9d58d-t6jjx,Uid:872abefe-e998-4710-ae06-61521f1057d1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a\"" May 10 00:01:39.079479 sshd[5150]: Accepted publickey for core from 147.75.109.163 port 52908 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:39.082873 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:39.092128 systemd-logind[1910]: New session 12 of user core. May 10 00:01:39.099019 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:01:39.231385 containerd[1935]: time="2025-05-10T00:01:39.229476122Z" level=info msg="StopPodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\"" May 10 00:01:39.231385 containerd[1935]: time="2025-05-10T00:01:39.230295950Z" level=info msg="StopPodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\"" May 10 00:01:39.232134 containerd[1935]: time="2025-05-10T00:01:39.232067402Z" level=info msg="StopPodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\"" May 10 00:01:39.460212 sshd[5150]: pam_unix(sshd:session): session closed for user core May 10 00:01:39.475754 systemd[1]: sshd@11-172.31.18.167:22-147.75.109.163:52908.service: Deactivated successfully. May 10 00:01:39.486515 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:01:39.491678 systemd-logind[1910]: Session 12 logged out. Waiting for processes to exit. May 10 00:01:39.520221 systemd[1]: Started sshd@12-172.31.18.167:22-147.75.109.163:52918.service - OpenSSH per-connection server daemon (147.75.109.163:52918). May 10 00:01:39.524242 systemd-logind[1910]: Removed session 12. May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.568 [INFO][5238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.572 [INFO][5238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" iface="eth0" netns="/var/run/netns/cni-7d1f5680-daad-ceec-7991-05ce7769bed5" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.574 [INFO][5238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" iface="eth0" netns="/var/run/netns/cni-7d1f5680-daad-ceec-7991-05ce7769bed5" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.577 [INFO][5238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" iface="eth0" netns="/var/run/netns/cni-7d1f5680-daad-ceec-7991-05ce7769bed5" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.577 [INFO][5238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.577 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.668 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.669 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.669 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.693 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.693 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.697 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:39.718149 containerd[1935]: 2025-05-10 00:01:39.706 [INFO][5238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:39.731729 containerd[1935]: time="2025-05-10T00:01:39.729944992Z" level=info msg="TearDown network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" successfully" May 10 00:01:39.731729 containerd[1935]: time="2025-05-10T00:01:39.730013548Z" level=info msg="StopPodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" returns successfully" May 10 00:01:39.732111 systemd[1]: run-netns-cni\x2d7d1f5680\x2ddaad\x2dceec\x2d7991\x2d05ce7769bed5.mount: Deactivated successfully. 
May 10 00:01:39.733624 sshd[5258]: Accepted publickey for core from 147.75.109.163 port 52918 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:39.738621 containerd[1935]: time="2025-05-10T00:01:39.737206552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5xkj,Uid:99d7a31f-c874-41d9-9abb-62b4e619c443,Namespace:calico-system,Attempt:1,}" May 10 00:01:39.738749 sshd[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.505 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.509 [INFO][5218] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" iface="eth0" netns="/var/run/netns/cni-77716bb2-41b4-70b5-3d6b-b2209f9fde27" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.511 [INFO][5218] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" iface="eth0" netns="/var/run/netns/cni-77716bb2-41b4-70b5-3d6b-b2209f9fde27" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.515 [INFO][5218] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" iface="eth0" netns="/var/run/netns/cni-77716bb2-41b4-70b5-3d6b-b2209f9fde27" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.518 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.520 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.691 [INFO][5260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.691 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.698 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.725 [WARNING][5260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.732 [INFO][5260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.741 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:39.789041 containerd[1935]: 2025-05-10 00:01:39.751 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" May 10 00:01:39.807967 containerd[1935]: time="2025-05-10T00:01:39.807902632Z" level=info msg="TearDown network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" successfully" May 10 00:01:39.808149 containerd[1935]: time="2025-05-10T00:01:39.807957520Z" level=info msg="StopPodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" returns successfully" May 10 00:01:39.809406 systemd[1]: run-netns-cni\x2d77716bb2\x2d41b4\x2d70b5\x2d3d6b\x2db2209f9fde27.mount: Deactivated successfully. May 10 00:01:39.809918 containerd[1935]: time="2025-05-10T00:01:39.809740265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnp79,Uid:4bfdbcc0-fe8b-4871-a536-daaca6c5cc23,Namespace:kube-system,Attempt:1,}" May 10 00:01:39.815458 systemd-networkd[1846]: cali66bc3ef4261: Gained IPv6LL May 10 00:01:39.817550 systemd-logind[1910]: New session 13 of user core. May 10 00:01:39.825112 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.593 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.595 [INFO][5242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" iface="eth0" netns="/var/run/netns/cni-357e4871-1853-150b-46c5-7b6fead265d7" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.595 [INFO][5242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" iface="eth0" netns="/var/run/netns/cni-357e4871-1853-150b-46c5-7b6fead265d7" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.596 [INFO][5242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" iface="eth0" netns="/var/run/netns/cni-357e4871-1853-150b-46c5-7b6fead265d7" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.597 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.599 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.744 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.746 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.746 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.839 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.840 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.846 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:39.888187 containerd[1935]: 2025-05-10 00:01:39.863 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" May 10 00:01:39.889597 containerd[1935]: time="2025-05-10T00:01:39.888438881Z" level=info msg="TearDown network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" successfully" May 10 00:01:39.889908 containerd[1935]: time="2025-05-10T00:01:39.889500113Z" level=info msg="StopPodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" returns successfully" May 10 00:01:39.891332 containerd[1935]: time="2025-05-10T00:01:39.891241097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56879d8f96-w2m6s,Uid:ec53ba6d-6936-42e6-ac4d-b9640fa50bea,Namespace:calico-system,Attempt:1,}" May 10 00:01:40.367191 sshd[5258]: pam_unix(sshd:session): session closed for user core May 10 00:01:40.377671 systemd[1]: sshd@12-172.31.18.167:22-147.75.109.163:52918.service: Deactivated successfully. May 10 00:01:40.390206 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:01:40.395058 systemd-logind[1910]: Session 13 logged out. Waiting for processes to exit. 
May 10 00:01:40.422323 systemd[1]: Started sshd@13-172.31.18.167:22-147.75.109.163:52920.service - OpenSSH per-connection server daemon (147.75.109.163:52920). May 10 00:01:40.427088 systemd-logind[1910]: Removed session 13. May 10 00:01:40.492417 systemd-networkd[1846]: cali2e6659998bd: Link UP May 10 00:01:40.492764 systemd-networkd[1846]: cali2e6659998bd: Gained carrier May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:39.992 [INFO][5288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0 csi-node-driver- calico-system 99d7a31f-c874-41d9-9abb-62b4e619c443 887 0 2025-05-10 00:01:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-167 csi-node-driver-g5xkj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e6659998bd [] []}} ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:39.994 [INFO][5288] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.254 [INFO][5331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" HandleID="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.298 [INFO][5331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" HandleID="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000557d60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-167", "pod":"csi-node-driver-g5xkj", "timestamp":"2025-05-10 00:01:40.254568435 +0000 UTC"}, Hostname:"ip-172-31-18-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.302 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.304 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.306 [INFO][5331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-167' May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.312 [INFO][5331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.325 [INFO][5331] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.339 [INFO][5331] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.356 [INFO][5331] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.364 [INFO][5331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.364 [INFO][5331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.380 [INFO][5331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3 May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.415 [INFO][5331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.463 [INFO][5331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.132/26] block=192.168.75.128/26 handle="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.465 [INFO][5331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.132/26] handle="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" host="ip-172-31-18-167" May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.466 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:40.544364 containerd[1935]: 2025-05-10 00:01:40.467 [INFO][5331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.132/26] IPv6=[] ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" HandleID="k8s-pod-network.abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.546048 containerd[1935]: 2025-05-10 00:01:40.484 [INFO][5288] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99d7a31f-c874-41d9-9abb-62b4e619c443", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"", Pod:"csi-node-driver-g5xkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e6659998bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:40.546048 containerd[1935]: 2025-05-10 00:01:40.484 [INFO][5288] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.132/32] ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.546048 containerd[1935]: 2025-05-10 00:01:40.484 [INFO][5288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e6659998bd ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.546048 containerd[1935]: 2025-05-10 00:01:40.491 [INFO][5288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.546048 containerd[1935]: 2025-05-10 00:01:40.491 [INFO][5288] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" 
Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99d7a31f-c874-41d9-9abb-62b4e619c443", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3", Pod:"csi-node-driver-g5xkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e6659998bd", MAC:"8e:be:55:45:46:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:40.546048 containerd[1935]: 2025-05-10 00:01:40.534 [INFO][5288] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3" Namespace="calico-system" Pod="csi-node-driver-g5xkj" WorkloadEndpoint="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:40.651584 containerd[1935]: time="2025-05-10T00:01:40.648123437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:40.651584 containerd[1935]: time="2025-05-10T00:01:40.648243461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:40.651584 containerd[1935]: time="2025-05-10T00:01:40.648272345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:40.651584 containerd[1935]: time="2025-05-10T00:01:40.648436601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:40.676840 sshd[5355]: Accepted publickey for core from 147.75.109.163 port 52920 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:40.694101 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:40.714306 systemd-logind[1910]: New session 14 of user core. May 10 00:01:40.716448 systemd-networkd[1846]: cali560ce8ca877: Link UP May 10 00:01:40.722582 systemd-networkd[1846]: cali560ce8ca877: Gained carrier May 10 00:01:40.729883 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 00:01:40.746042 systemd[1]: run-netns-cni\x2d357e4871\x2d1853\x2d150b\x2d46c5\x2d7b6fead265d7.mount: Deactivated successfully. 
May 10 00:01:40.778082 systemd[1]: Started cri-containerd-abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3.scope - libcontainer container abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3. May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.053 [INFO][5299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0 coredns-7db6d8ff4d- kube-system 4bfdbcc0-fe8b-4871-a536-daaca6c5cc23 886 0 2025-05-10 00:01:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-167 coredns-7db6d8ff4d-nnp79 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali560ce8ca877 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.056 [INFO][5299] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.261 [INFO][5336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" HandleID="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.309 [INFO][5336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" HandleID="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-167", "pod":"coredns-7db6d8ff4d-nnp79", "timestamp":"2025-05-10 00:01:40.260321031 +0000 UTC"}, Hostname:"ip-172-31-18-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.310 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.466 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.469 [INFO][5336] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-167' May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.477 [INFO][5336] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.498 [INFO][5336] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.535 [INFO][5336] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.543 [INFO][5336] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.562 [INFO][5336] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.562 [INFO][5336] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.571 [INFO][5336] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7 May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.587 [INFO][5336] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.622 [INFO][5336] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.133/26] block=192.168.75.128/26 handle="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.625 [INFO][5336] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.133/26] handle="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" host="ip-172-31-18-167" May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.627 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:40.805570 containerd[1935]: 2025-05-10 00:01:40.631 [INFO][5336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.133/26] IPv6=[] ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" HandleID="k8s-pod-network.aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:40.810921 containerd[1935]: 2025-05-10 00:01:40.656 [INFO][5299] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"", Pod:"coredns-7db6d8ff4d-nnp79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali560ce8ca877", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:40.810921 containerd[1935]: 2025-05-10 00:01:40.656 [INFO][5299] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.133/32] ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:40.810921 containerd[1935]: 2025-05-10 00:01:40.656 [INFO][5299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali560ce8ca877 ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:40.810921 containerd[1935]: 2025-05-10 00:01:40.724 [INFO][5299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" 
May 10 00:01:40.810921 containerd[1935]: 2025-05-10 00:01:40.728 [INFO][5299] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7", Pod:"coredns-7db6d8ff4d-nnp79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali560ce8ca877", MAC:"3a:17:9d:2c:b6:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:40.810921 containerd[1935]: 2025-05-10 00:01:40.785 [INFO][5299] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnp79" WorkloadEndpoint="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0" May 10 00:01:40.922934 systemd-networkd[1846]: cali361b1fd048f: Link UP May 10 00:01:40.942299 systemd-networkd[1846]: cali361b1fd048f: Gained carrier May 10 00:01:40.993889 containerd[1935]: time="2025-05-10T00:01:40.991556382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:40.993889 containerd[1935]: time="2025-05-10T00:01:40.991663818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:40.993889 containerd[1935]: time="2025-05-10T00:01:40.991693110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:40.995390 containerd[1935]: time="2025-05-10T00:01:40.995268834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.126 [INFO][5316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0 calico-kube-controllers-56879d8f96- calico-system ec53ba6d-6936-42e6-ac4d-b9640fa50bea 888 0 2025-05-10 00:01:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56879d8f96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-167 calico-kube-controllers-56879d8f96-w2m6s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali361b1fd048f [] []}} ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.128 [INFO][5316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.312 [INFO][5342] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" HandleID="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.357 [INFO][5342] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" HandleID="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034e420), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-167", "pod":"calico-kube-controllers-56879d8f96-w2m6s", "timestamp":"2025-05-10 00:01:40.312457935 +0000 UTC"}, Hostname:"ip-172-31-18-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.357 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.627 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.632 [INFO][5342] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-167' May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.646 [INFO][5342] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.716 [INFO][5342] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.756 [INFO][5342] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.780 [INFO][5342] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.797 [INFO][5342] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.797 [INFO][5342] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.805 [INFO][5342] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29 May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.820 [INFO][5342] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.877 [INFO][5342] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.134/26] block=192.168.75.128/26 handle="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.877 [INFO][5342] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.134/26] handle="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" host="ip-172-31-18-167" May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.877 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:41.090033 containerd[1935]: 2025-05-10 00:01:40.877 [INFO][5342] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.134/26] IPv6=[] ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" HandleID="k8s-pod-network.7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.101038 containerd[1935]: 2025-05-10 00:01:40.902 [INFO][5316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0", GenerateName:"calico-kube-controllers-56879d8f96-", Namespace:"calico-system", SelfLink:"", UID:"ec53ba6d-6936-42e6-ac4d-b9640fa50bea", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56879d8f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"", Pod:"calico-kube-controllers-56879d8f96-w2m6s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali361b1fd048f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:41.101038 containerd[1935]: 2025-05-10 00:01:40.902 [INFO][5316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.134/32] ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.101038 containerd[1935]: 2025-05-10 00:01:40.902 [INFO][5316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali361b1fd048f ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.101038 containerd[1935]: 2025-05-10 00:01:40.957 [INFO][5316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.101038 containerd[1935]: 2025-05-10 00:01:40.975 [INFO][5316] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0", GenerateName:"calico-kube-controllers-56879d8f96-", Namespace:"calico-system", SelfLink:"", UID:"ec53ba6d-6936-42e6-ac4d-b9640fa50bea", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56879d8f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29", Pod:"calico-kube-controllers-56879d8f96-w2m6s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali361b1fd048f", MAC:"06:52:45:91:93:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:41.101038 containerd[1935]: 2025-05-10 00:01:41.075 [INFO][5316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29" Namespace="calico-system" Pod="calico-kube-controllers-56879d8f96-w2m6s" WorkloadEndpoint="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0" May 10 00:01:41.191299 systemd[1]: Started cri-containerd-aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7.scope - libcontainer container aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7. May 10 00:01:41.267117 sshd[5355]: pam_unix(sshd:session): session closed for user core May 10 00:01:41.285302 systemd[1]: sshd@13-172.31.18.167:22-147.75.109.163:52920.service: Deactivated successfully. May 10 00:01:41.292615 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:01:41.299606 systemd-logind[1910]: Session 14 logged out. Waiting for processes to exit. May 10 00:01:41.305040 systemd-logind[1910]: Removed session 14. May 10 00:01:41.318066 containerd[1935]: time="2025-05-10T00:01:41.315337108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:41.318066 containerd[1935]: time="2025-05-10T00:01:41.315469684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:41.324127 containerd[1935]: time="2025-05-10T00:01:41.315507904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:41.326850 containerd[1935]: time="2025-05-10T00:01:41.322320292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:41.485462 containerd[1935]: time="2025-05-10T00:01:41.485404217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5xkj,Uid:99d7a31f-c874-41d9-9abb-62b4e619c443,Namespace:calico-system,Attempt:1,} returns sandbox id \"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3\"" May 10 00:01:41.507953 systemd[1]: Started cri-containerd-7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29.scope - libcontainer container 7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29. May 10 00:01:41.510742 containerd[1935]: time="2025-05-10T00:01:41.510083969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnp79,Uid:4bfdbcc0-fe8b-4871-a536-daaca6c5cc23,Namespace:kube-system,Attempt:1,} returns sandbox id \"aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7\"" May 10 00:01:41.523225 containerd[1935]: time="2025-05-10T00:01:41.523155773Z" level=info msg="CreateContainer within sandbox \"aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:01:41.571276 containerd[1935]: time="2025-05-10T00:01:41.569828465Z" level=info msg="CreateContainer within sandbox \"aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"266ea96b18fb988a34453a82b867f627cf505eaa85b5b10b42b82d9010a131af\"" May 10 00:01:41.576690 containerd[1935]: time="2025-05-10T00:01:41.575204021Z" level=info msg="StartContainer for \"266ea96b18fb988a34453a82b867f627cf505eaa85b5b10b42b82d9010a131af\"" May 10 00:01:41.718978 systemd[1]: Started cri-containerd-266ea96b18fb988a34453a82b867f627cf505eaa85b5b10b42b82d9010a131af.scope - libcontainer container 266ea96b18fb988a34453a82b867f627cf505eaa85b5b10b42b82d9010a131af. 
May 10 00:01:41.758529 containerd[1935]: time="2025-05-10T00:01:41.758213070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56879d8f96-w2m6s,Uid:ec53ba6d-6936-42e6-ac4d-b9640fa50bea,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29\"" May 10 00:01:41.895710 containerd[1935]: time="2025-05-10T00:01:41.895504819Z" level=info msg="StartContainer for \"266ea96b18fb988a34453a82b867f627cf505eaa85b5b10b42b82d9010a131af\" returns successfully" May 10 00:01:41.990014 systemd-networkd[1846]: cali2e6659998bd: Gained IPv6LL May 10 00:01:42.311767 systemd-networkd[1846]: cali361b1fd048f: Gained IPv6LL May 10 00:01:42.566252 systemd-networkd[1846]: cali560ce8ca877: Gained IPv6LL May 10 00:01:42.849984 kubelet[3394]: I0510 00:01:42.849637 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nnp79" podStartSLOduration=40.84960908 podStartE2EDuration="40.84960908s" podCreationTimestamp="2025-05-10 00:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:42.801642835 +0000 UTC m=+53.872395448" watchObservedRunningTime="2025-05-10 00:01:42.84960908 +0000 UTC m=+53.920361669" May 10 00:01:42.972741 containerd[1935]: time="2025-05-10T00:01:42.972133436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:42.974912 containerd[1935]: time="2025-05-10T00:01:42.974862176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:01:42.976927 containerd[1935]: time="2025-05-10T00:01:42.976848212Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:42.981814 containerd[1935]: time="2025-05-10T00:01:42.981676880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:42.983940 containerd[1935]: time="2025-05-10T00:01:42.983518880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 6.087498006s" May 10 00:01:42.983940 containerd[1935]: time="2025-05-10T00:01:42.983589056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:01:42.987108 containerd[1935]: time="2025-05-10T00:01:42.986835536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:01:43.002456 containerd[1935]: time="2025-05-10T00:01:43.002047180Z" level=info msg="CreateContainer within sandbox \"7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:01:43.027467 containerd[1935]: time="2025-05-10T00:01:43.027390760Z" level=info msg="CreateContainer within sandbox 
\"7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"62826e0c4c833e45fa22872a11d6dfca21c550269c551140e55a1f5a937cfa8d\"" May 10 00:01:43.030280 containerd[1935]: time="2025-05-10T00:01:43.030077476Z" level=info msg="StartContainer for \"62826e0c4c833e45fa22872a11d6dfca21c550269c551140e55a1f5a937cfa8d\"" May 10 00:01:43.097925 systemd[1]: Started cri-containerd-62826e0c4c833e45fa22872a11d6dfca21c550269c551140e55a1f5a937cfa8d.scope - libcontainer container 62826e0c4c833e45fa22872a11d6dfca21c550269c551140e55a1f5a937cfa8d. May 10 00:01:43.248901 containerd[1935]: time="2025-05-10T00:01:43.248439558Z" level=info msg="StartContainer for \"62826e0c4c833e45fa22872a11d6dfca21c550269c551140e55a1f5a937cfa8d\" returns successfully" May 10 00:01:43.331125 containerd[1935]: time="2025-05-10T00:01:43.331044666Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:43.333452 containerd[1935]: time="2025-05-10T00:01:43.333083586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:01:43.338563 containerd[1935]: time="2025-05-10T00:01:43.338502294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 351.394694ms" May 10 00:01:43.339001 containerd[1935]: time="2025-05-10T00:01:43.338807238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:01:43.342587 containerd[1935]: time="2025-05-10T00:01:43.342533190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:01:43.346829 containerd[1935]: time="2025-05-10T00:01:43.345811590Z" level=info msg="CreateContainer within sandbox \"89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:01:43.398816 containerd[1935]: time="2025-05-10T00:01:43.398642634Z" level=info msg="CreateContainer within sandbox \"89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15fa2244e307dff3b7018691374bb5a4c4ea1b4bf7e91e31fbded7cbbd5c4176\"" May 10 00:01:43.402494 containerd[1935]: time="2025-05-10T00:01:43.401539710Z" level=info msg="StartContainer for \"15fa2244e307dff3b7018691374bb5a4c4ea1b4bf7e91e31fbded7cbbd5c4176\"" May 10 00:01:43.464102 systemd[1]: Started cri-containerd-15fa2244e307dff3b7018691374bb5a4c4ea1b4bf7e91e31fbded7cbbd5c4176.scope - libcontainer container 15fa2244e307dff3b7018691374bb5a4c4ea1b4bf7e91e31fbded7cbbd5c4176. 
May 10 00:01:43.531529 containerd[1935]: time="2025-05-10T00:01:43.530383279Z" level=info msg="StartContainer for \"15fa2244e307dff3b7018691374bb5a4c4ea1b4bf7e91e31fbded7cbbd5c4176\" returns successfully" May 10 00:01:43.825411 kubelet[3394]: I0510 00:01:43.824286 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8444c9d58d-q5m7h" podStartSLOduration=27.733831106 podStartE2EDuration="33.824263388s" podCreationTimestamp="2025-05-10 00:01:10 +0000 UTC" firstStartedPulling="2025-05-10 00:01:36.895441394 +0000 UTC m=+47.966193995" lastFinishedPulling="2025-05-10 00:01:42.985873604 +0000 UTC m=+54.056626277" observedRunningTime="2025-05-10 00:01:43.79605626 +0000 UTC m=+54.866808873" watchObservedRunningTime="2025-05-10 00:01:43.824263388 +0000 UTC m=+54.895015989" May 10 00:01:43.825411 kubelet[3394]: I0510 00:01:43.824511 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8444c9d58d-t6jjx" podStartSLOduration=29.504788307 podStartE2EDuration="33.824501252s" podCreationTimestamp="2025-05-10 00:01:10 +0000 UTC" firstStartedPulling="2025-05-10 00:01:39.020573797 +0000 UTC m=+50.091326386" lastFinishedPulling="2025-05-10 00:01:43.340286742 +0000 UTC m=+54.411039331" observedRunningTime="2025-05-10 00:01:43.823859336 +0000 UTC m=+54.894611961" watchObservedRunningTime="2025-05-10 00:01:43.824501252 +0000 UTC m=+54.895253853" May 10 00:01:44.785690 kubelet[3394]: I0510 00:01:44.784683 3394 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:44.803173 ntpd[1903]: Listen normally on 8 vxlan.calico 192.168.75.128:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 8 vxlan.calico 192.168.75.128:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 9 vxlan.calico [fe80::6491:7fff:fe6e:ba90%4]:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 10 caliefb4d0bd527 [fe80::ecee:eeff:feee:eeee%7]:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 11 calibbcf15b207f [fe80::ecee:eeff:feee:eeee%8]:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 12 cali66bc3ef4261 [fe80::ecee:eeff:feee:eeee%9]:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 13 cali2e6659998bd [fe80::ecee:eeff:feee:eeee%10]:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 14 cali560ce8ca877 [fe80::ecee:eeff:feee:eeee%11]:123 May 10 00:01:44.805038 ntpd[1903]: 10 May 00:01:44 ntpd[1903]: Listen normally on 15 cali361b1fd048f [fe80::ecee:eeff:feee:eeee%12]:123 May 10 00:01:44.803295 ntpd[1903]: Listen normally on 9 vxlan.calico [fe80::6491:7fff:fe6e:ba90%4]:123 May 10 00:01:44.803379 ntpd[1903]: Listen normally on 10 caliefb4d0bd527 [fe80::ecee:eeff:feee:eeee%7]:123 May 10 00:01:44.803446 ntpd[1903]: Listen normally on 11 calibbcf15b207f [fe80::ecee:eeff:feee:eeee%8]:123 May 10 00:01:44.803512 ntpd[1903]: Listen normally on 12 cali66bc3ef4261 [fe80::ecee:eeff:feee:eeee%9]:123 May 10 00:01:44.804509 ntpd[1903]: Listen normally on 13 cali2e6659998bd [fe80::ecee:eeff:feee:eeee%10]:123 May 10 00:01:44.804589 ntpd[1903]: Listen normally on 14 cali560ce8ca877 [fe80::ecee:eeff:feee:eeee%11]:123 May 10 00:01:44.804656 ntpd[1903]: Listen normally on 15 cali361b1fd048f [fe80::ecee:eeff:feee:eeee%12]:123 May 10 00:01:45.093568 containerd[1935]: 
time="2025-05-10T00:01:45.092291695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:45.097074 containerd[1935]: time="2025-05-10T00:01:45.096920347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:01:45.099268 containerd[1935]: time="2025-05-10T00:01:45.099178351Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:45.108381 containerd[1935]: time="2025-05-10T00:01:45.107994499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:45.111045 containerd[1935]: time="2025-05-10T00:01:45.110971555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.768212093s" May 10 00:01:45.111308 containerd[1935]: time="2025-05-10T00:01:45.111239971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:01:45.114514 containerd[1935]: time="2025-05-10T00:01:45.113605495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 00:01:45.119038 containerd[1935]: time="2025-05-10T00:01:45.118985467Z" level=info msg="CreateContainer within sandbox \"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:01:45.174390 containerd[1935]: time="2025-05-10T00:01:45.174174895Z" level=info msg="CreateContainer within sandbox \"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2dfcb7fa2f97c6c9e8999ac5a427fe471a092063f1ecb588e7f7a4df94952130\"" May 10 00:01:45.178107 containerd[1935]: time="2025-05-10T00:01:45.177892051Z" level=info msg="StartContainer for \"2dfcb7fa2f97c6c9e8999ac5a427fe471a092063f1ecb588e7f7a4df94952130\"" May 10 00:01:45.268840 systemd[1]: Started cri-containerd-2dfcb7fa2f97c6c9e8999ac5a427fe471a092063f1ecb588e7f7a4df94952130.scope - libcontainer container 2dfcb7fa2f97c6c9e8999ac5a427fe471a092063f1ecb588e7f7a4df94952130. May 10 00:01:45.396000 containerd[1935]: time="2025-05-10T00:01:45.395636228Z" level=info msg="StartContainer for \"2dfcb7fa2f97c6c9e8999ac5a427fe471a092063f1ecb588e7f7a4df94952130\" returns successfully" May 10 00:01:46.333324 systemd[1]: Started sshd@14-172.31.18.167:22-147.75.109.163:52930.service - OpenSSH per-connection server daemon (147.75.109.163:52930). May 10 00:01:46.554260 sshd[5710]: Accepted publickey for core from 147.75.109.163 port 52930 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:46.561000 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:46.578334 systemd-logind[1910]: New session 15 of user core. May 10 00:01:46.585109 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 10 00:01:46.948088 sshd[5710]: pam_unix(sshd:session): session closed for user core May 10 00:01:46.961523 systemd[1]: sshd@14-172.31.18.167:22-147.75.109.163:52930.service: Deactivated successfully. May 10 00:01:46.969441 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:01:46.977848 systemd-logind[1910]: Session 15 logged out. Waiting for processes to exit. May 10 00:01:46.981305 systemd-logind[1910]: Removed session 15. May 10 00:01:47.867843 containerd[1935]: time="2025-05-10T00:01:47.866925289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:47.869793 containerd[1935]: time="2025-05-10T00:01:47.869704081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:01:47.870865 containerd[1935]: time="2025-05-10T00:01:47.870760945Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:47.878796 containerd[1935]: time="2025-05-10T00:01:47.878701873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.765037482s" May 10 00:01:47.879387 containerd[1935]: time="2025-05-10T00:01:47.879034609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:01:47.879387 containerd[1935]: time="2025-05-10T00:01:47.878991277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:47.883885 containerd[1935]: time="2025-05-10T00:01:47.883104385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:01:47.913420 containerd[1935]: time="2025-05-10T00:01:47.913217917Z" level=info msg="CreateContainer within sandbox \"7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:01:47.962058 containerd[1935]: time="2025-05-10T00:01:47.961974877Z" level=info msg="CreateContainer within sandbox \"7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"94ae96a92bf00c01db3377b4f394b1173c92b239ff21ff19902444e64dd62567\"" May 10 00:01:47.963210 containerd[1935]: time="2025-05-10T00:01:47.963036613Z" level=info msg="StartContainer for \"94ae96a92bf00c01db3377b4f394b1173c92b239ff21ff19902444e64dd62567\"" May 10 00:01:48.050012 systemd[1]: Started cri-containerd-94ae96a92bf00c01db3377b4f394b1173c92b239ff21ff19902444e64dd62567.scope - libcontainer container 94ae96a92bf00c01db3377b4f394b1173c92b239ff21ff19902444e64dd62567. 
May 10 00:01:48.173972 containerd[1935]: time="2025-05-10T00:01:48.173751850Z" level=info msg="StartContainer for \"94ae96a92bf00c01db3377b4f394b1173c92b239ff21ff19902444e64dd62567\" returns successfully" May 10 00:01:48.849061 kubelet[3394]: I0510 00:01:48.848373 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56879d8f96-w2m6s" podStartSLOduration=30.730415538 podStartE2EDuration="36.848348977s" podCreationTimestamp="2025-05-10 00:01:12 +0000 UTC" firstStartedPulling="2025-05-10 00:01:41.764106126 +0000 UTC m=+52.834858727" lastFinishedPulling="2025-05-10 00:01:47.882039517 +0000 UTC m=+58.952792166" observedRunningTime="2025-05-10 00:01:48.847038421 +0000 UTC m=+59.917791058" watchObservedRunningTime="2025-05-10 00:01:48.848348977 +0000 UTC m=+59.919101602" May 10 00:01:49.262415 containerd[1935]: time="2025-05-10T00:01:49.262312859Z" level=info msg="StopPodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\"" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.371 [WARNING][5792] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"872abefe-e998-4710-ae06-61521f1057d1", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a", Pod:"calico-apiserver-8444c9d58d-t6jjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66bc3ef4261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.372 [INFO][5792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.373 [INFO][5792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" iface="eth0" netns="" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.373 [INFO][5792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.373 [INFO][5792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.438 [INFO][5800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.438 [INFO][5800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.438 [INFO][5800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.455 [WARNING][5800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.455 [INFO][5800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.459 [INFO][5800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:49.469933 containerd[1935]: 2025-05-10 00:01:49.465 [INFO][5792] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.469933 containerd[1935]: time="2025-05-10T00:01:49.469737252Z" level=info msg="TearDown network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" successfully" May 10 00:01:49.472336 containerd[1935]: time="2025-05-10T00:01:49.470761032Z" level=info msg="StopPodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" returns successfully" May 10 00:01:49.473190 containerd[1935]: time="2025-05-10T00:01:49.472634197Z" level=info msg="RemovePodSandbox for \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\"" May 10 00:01:49.473190 containerd[1935]: time="2025-05-10T00:01:49.472698433Z" level=info msg="Forcibly stopping sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\"" May 10 00:01:49.567684 containerd[1935]: time="2025-05-10T00:01:49.567339601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:49.571987 containerd[1935]: time="2025-05-10T00:01:49.571906117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:01:49.574355 containerd[1935]: time="2025-05-10T00:01:49.574197373Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:49.584378 containerd[1935]: time="2025-05-10T00:01:49.584288497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:49.585968 containerd[1935]: time="2025-05-10T00:01:49.585897133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.702727636s" May 10 00:01:49.586321 containerd[1935]: time="2025-05-10T00:01:49.585985501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:01:49.594865 containerd[1935]: time="2025-05-10T00:01:49.594767881Z" level=info msg="CreateContainer within sandbox \"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:01:49.638495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74871889.mount: Deactivated successfully. 
May 10 00:01:49.644286 containerd[1935]: time="2025-05-10T00:01:49.644170597Z" level=info msg="CreateContainer within sandbox \"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0e43ace36d2167cba898cc1eff5ee5901c6a6a06b0930c7c9cd968de719610ad\"" May 10 00:01:49.645889 containerd[1935]: time="2025-05-10T00:01:49.645822589Z" level=info msg="StartContainer for \"0e43ace36d2167cba898cc1eff5ee5901c6a6a06b0930c7c9cd968de719610ad\"" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.581 [WARNING][5818] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"872abefe-e998-4710-ae06-61521f1057d1", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"89ffd05d868504f6a64539f33e638e46eb3995884e550b49138a484b58eef37a", Pod:"calico-apiserver-8444c9d58d-t6jjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66bc3ef4261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.581 [INFO][5818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.581 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" iface="eth0" netns="" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.581 [INFO][5818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.581 [INFO][5818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.657 [INFO][5826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.658 [INFO][5826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.658 [INFO][5826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.690 [WARNING][5826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.690 [INFO][5826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" HandleID="k8s-pod-network.4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--t6jjx-eth0" May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.696 [INFO][5826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:49.707383 containerd[1935]: 2025-05-10 00:01:49.703 [INFO][5818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c" May 10 00:01:49.708330 containerd[1935]: time="2025-05-10T00:01:49.707476214Z" level=info msg="TearDown network for sandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" successfully" May 10 00:01:49.726300 containerd[1935]: time="2025-05-10T00:01:49.725568194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:49.726300 containerd[1935]: time="2025-05-10T00:01:49.725682818Z" level=info msg="RemovePodSandbox \"4d65f89ea30636068b14472a147db6230b0d8b2406d663c025ff6e0d46a6158c\" returns successfully" May 10 00:01:49.731099 containerd[1935]: time="2025-05-10T00:01:49.731027798Z" level=info msg="StopPodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\"" May 10 00:01:49.745092 systemd[1]: Started cri-containerd-0e43ace36d2167cba898cc1eff5ee5901c6a6a06b0930c7c9cd968de719610ad.scope - libcontainer container 0e43ace36d2167cba898cc1eff5ee5901c6a6a06b0930c7c9cd968de719610ad. 
May 10 00:01:49.826851 containerd[1935]: time="2025-05-10T00:01:49.826060610Z" level=info msg="StartContainer for \"0e43ace36d2167cba898cc1eff5ee5901c6a6a06b0930c7c9cd968de719610ad\" returns successfully" May 10 00:01:49.887582 kubelet[3394]: I0510 00:01:49.887473 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g5xkj" podStartSLOduration=29.791846743 podStartE2EDuration="37.887448147s" podCreationTimestamp="2025-05-10 00:01:12 +0000 UTC" firstStartedPulling="2025-05-10 00:01:41.493545233 +0000 UTC m=+52.564297834" lastFinishedPulling="2025-05-10 00:01:49.589146637 +0000 UTC m=+60.659899238" observedRunningTime="2025-05-10 00:01:49.884838483 +0000 UTC m=+60.955591120" watchObservedRunningTime="2025-05-10 00:01:49.887448147 +0000 UTC m=+60.958200736" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.848 [WARNING][5858] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"462d92e2-f6b9-4559-82e4-3c84f91c0749", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c", Pod:"coredns-7db6d8ff4d-7496j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbcf15b207f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.850 [INFO][5858] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.850 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" iface="eth0" netns="" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.851 [INFO][5858] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.851 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.941 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.943 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.943 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.975 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.978 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.984 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:49.995735 containerd[1935]: 2025-05-10 00:01:49.989 [INFO][5858] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:49.997860 containerd[1935]: time="2025-05-10T00:01:49.995737227Z" level=info msg="TearDown network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" successfully" May 10 00:01:49.997860 containerd[1935]: time="2025-05-10T00:01:49.995815839Z" level=info msg="StopPodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" returns successfully" May 10 00:01:49.997860 containerd[1935]: time="2025-05-10T00:01:49.997061523Z" level=info msg="RemovePodSandbox for \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\"" May 10 00:01:49.997860 containerd[1935]: time="2025-05-10T00:01:49.997114587Z" level=info msg="Forcibly stopping sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\"" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.113 [WARNING][5930] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"462d92e2-f6b9-4559-82e4-3c84f91c0749", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"518acabd98dad942981a3e29cde9b19b7a8c10734ca968c7d852b5df936a478c", Pod:"coredns-7db6d8ff4d-7496j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbcf15b207f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.113 [INFO][5930] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.113 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" iface="eth0" netns="" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.113 [INFO][5930] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.113 [INFO][5930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.154 [INFO][5937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.155 [INFO][5937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.155 [INFO][5937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.168 [WARNING][5937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.168 [INFO][5937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" HandleID="k8s-pod-network.9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--7496j-eth0" May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.170 [INFO][5937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:50.176129 containerd[1935]: 2025-05-10 00:01:50.173 [INFO][5930] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b" May 10 00:01:50.177464 containerd[1935]: time="2025-05-10T00:01:50.176872200Z" level=info msg="TearDown network for sandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" successfully" May 10 00:01:50.184015 containerd[1935]: time="2025-05-10T00:01:50.183938520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:50.184195 containerd[1935]: time="2025-05-10T00:01:50.184113072Z" level=info msg="RemovePodSandbox \"9d00cdecee8cb80be220f24d21ed1ebd905b89fd3b748b3c0e0343d17e72ce3b\" returns successfully" May 10 00:01:50.185204 containerd[1935]: time="2025-05-10T00:01:50.185158776Z" level=info msg="StopPodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\"" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.252 [WARNING][5955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99d7a31f-c874-41d9-9abb-62b4e619c443", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3", Pod:"csi-node-driver-g5xkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e6659998bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.252 [INFO][5955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.252 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" iface="eth0" netns="" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.252 [INFO][5955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.252 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.289 [INFO][5962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.289 [INFO][5962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.289 [INFO][5962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.303 [WARNING][5962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.303 [INFO][5962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.306 [INFO][5962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:50.311169 containerd[1935]: 2025-05-10 00:01:50.308 [INFO][5955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.311169 containerd[1935]: time="2025-05-10T00:01:50.310998901Z" level=info msg="TearDown network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" successfully" May 10 00:01:50.311169 containerd[1935]: time="2025-05-10T00:01:50.311034445Z" level=info msg="StopPodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" returns successfully" May 10 00:01:50.312689 containerd[1935]: time="2025-05-10T00:01:50.311896417Z" level=info msg="RemovePodSandbox for \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\"" May 10 00:01:50.312689 containerd[1935]: time="2025-05-10T00:01:50.311971549Z" level=info msg="Forcibly stopping sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\"" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.378 [WARNING][5980] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99d7a31f-c874-41d9-9abb-62b4e619c443", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"abe05cddf6331e03ba01de205b8dd60df8b53479b05ee584581a81a3eeb76af3", Pod:"csi-node-driver-g5xkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e6659998bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.378 [INFO][5980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.378 [INFO][5980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" iface="eth0" netns="" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.378 [INFO][5980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.378 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.415 [INFO][5987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.415 [INFO][5987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.415 [INFO][5987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.431 [WARNING][5987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.431 [INFO][5987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" HandleID="k8s-pod-network.fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" Workload="ip--172--31--18--167-k8s-csi--node--driver--g5xkj-eth0" May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.433 [INFO][5987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:50.439142 containerd[1935]: 2025-05-10 00:01:50.436 [INFO][5980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab" May 10 00:01:50.439142 containerd[1935]: time="2025-05-10T00:01:50.438901477Z" level=info msg="TearDown network for sandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" successfully" May 10 00:01:50.448272 containerd[1935]: time="2025-05-10T00:01:50.448092481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:50.448272 containerd[1935]: time="2025-05-10T00:01:50.448203049Z" level=info msg="RemovePodSandbox \"fd5ece4ce80ab62d7eed14d7fb04cae95fc01937d7d43e8ff6a52f26f85b34ab\" returns successfully" May 10 00:01:50.449230 containerd[1935]: time="2025-05-10T00:01:50.448999729Z" level=info msg="StopPodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\"" May 10 00:01:50.504287 kubelet[3394]: I0510 00:01:50.504236 3394 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:01:50.504448 kubelet[3394]: I0510 00:01:50.504376 3394 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.533 [WARNING][6006] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d08f91a-f116-4037-bee3-3fae03414811", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b", Pod:"calico-apiserver-8444c9d58d-q5m7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefb4d0bd527", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.535 [INFO][6006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.536 [INFO][6006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" iface="eth0" netns="" May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.536 [INFO][6006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.536 [INFO][6006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.583 [INFO][6013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0" May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.583 [INFO][6013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.583 [INFO][6013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.597 [WARNING][6013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0"
May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.597 [INFO][6013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0"
May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.600 [INFO][6013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:01:50.605849 containerd[1935]: 2025-05-10 00:01:50.603 [INFO][6006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b"
May 10 00:01:50.607196 containerd[1935]: time="2025-05-10T00:01:50.605893730Z" level=info msg="TearDown network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" successfully"
May 10 00:01:50.607196 containerd[1935]: time="2025-05-10T00:01:50.605930954Z" level=info msg="StopPodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" returns successfully"
May 10 00:01:50.607859 containerd[1935]: time="2025-05-10T00:01:50.607373270Z" level=info msg="RemovePodSandbox for \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\""
May 10 00:01:50.607859 containerd[1935]: time="2025-05-10T00:01:50.607427078Z" level=info msg="Forcibly stopping sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\""
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.676 [WARNING][6031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0", GenerateName:"calico-apiserver-8444c9d58d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d08f91a-f116-4037-bee3-3fae03414811", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8444c9d58d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"7a0cf64289510d62397965a9102b2625e0c698985e24f6b5d60bee3d4d2bf79b", Pod:"calico-apiserver-8444c9d58d-q5m7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefb4d0bd527", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.676 [INFO][6031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b"
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.676 [INFO][6031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" iface="eth0" netns=""
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.676 [INFO][6031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b"
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.676 [INFO][6031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b"
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.718 [INFO][6038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0"
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.719 [INFO][6038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.719 [INFO][6038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.731 [WARNING][6038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0"
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.731 [INFO][6038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" HandleID="k8s-pod-network.8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b" Workload="ip--172--31--18--167-k8s-calico--apiserver--8444c9d58d--q5m7h-eth0"
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.733 [INFO][6038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:01:50.738842 containerd[1935]: 2025-05-10 00:01:50.736 [INFO][6031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b"
May 10 00:01:50.740243 containerd[1935]: time="2025-05-10T00:01:50.738874491Z" level=info msg="TearDown network for sandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" successfully"
May 10 00:01:50.745953 containerd[1935]: time="2025-05-10T00:01:50.745885539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:01:50.746093 containerd[1935]: time="2025-05-10T00:01:50.745986183Z" level=info msg="RemovePodSandbox \"8f1d94de94aa121c6b1c05a852fce85cb79956a86cf045d611f5b6760eace13b\" returns successfully"
May 10 00:01:50.747319 containerd[1935]: time="2025-05-10T00:01:50.747018447Z" level=info msg="StopPodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\""
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.817 [WARNING][6056] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0", GenerateName:"calico-kube-controllers-56879d8f96-", Namespace:"calico-system", SelfLink:"", UID:"ec53ba6d-6936-42e6-ac4d-b9640fa50bea", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56879d8f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29", Pod:"calico-kube-controllers-56879d8f96-w2m6s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali361b1fd048f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.818 [INFO][6056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.818 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" iface="eth0" netns=""
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.818 [INFO][6056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.818 [INFO][6056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.855 [INFO][6063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0"
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.855 [INFO][6063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.855 [INFO][6063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.869 [WARNING][6063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0"
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.869 [INFO][6063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0"
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.871 [INFO][6063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:01:50.878031 containerd[1935]: 2025-05-10 00:01:50.875 [INFO][6056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:50.879148 containerd[1935]: time="2025-05-10T00:01:50.878955807Z" level=info msg="TearDown network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" successfully"
May 10 00:01:50.879148 containerd[1935]: time="2025-05-10T00:01:50.878999175Z" level=info msg="StopPodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" returns successfully"
May 10 00:01:50.880241 containerd[1935]: time="2025-05-10T00:01:50.879700719Z" level=info msg="RemovePodSandbox for \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\""
May 10 00:01:50.880241 containerd[1935]: time="2025-05-10T00:01:50.879747351Z" level=info msg="Forcibly stopping sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\""
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.947 [WARNING][6081] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0", GenerateName:"calico-kube-controllers-56879d8f96-", Namespace:"calico-system", SelfLink:"", UID:"ec53ba6d-6936-42e6-ac4d-b9640fa50bea", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56879d8f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"7c08ecd307f8b0cf0e45976e29444354c5451f1080af4b10dcc29c05d1f36a29", Pod:"calico-kube-controllers-56879d8f96-w2m6s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali361b1fd048f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.947 [INFO][6081] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.947 [INFO][6081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" iface="eth0" netns=""
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.947 [INFO][6081] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.947 [INFO][6081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.989 [INFO][6088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0"
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.990 [INFO][6088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:50.990 [INFO][6088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:51.002 [WARNING][6088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0"
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:51.002 [INFO][6088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" HandleID="k8s-pod-network.88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933" Workload="ip--172--31--18--167-k8s-calico--kube--controllers--56879d8f96--w2m6s-eth0"
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:51.008 [INFO][6088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:01:51.014525 containerd[1935]: 2025-05-10 00:01:51.011 [INFO][6081] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933"
May 10 00:01:51.014525 containerd[1935]: time="2025-05-10T00:01:51.014387016Z" level=info msg="TearDown network for sandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" successfully"
May 10 00:01:51.021741 containerd[1935]: time="2025-05-10T00:01:51.021675252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:01:51.022164 containerd[1935]: time="2025-05-10T00:01:51.021790056Z" level=info msg="RemovePodSandbox \"88ef5cc980df2a63e92f81a14c4a9d4ea7ce6a068ead84d101cf1d1726dfe933\" returns successfully"
May 10 00:01:51.022461 containerd[1935]: time="2025-05-10T00:01:51.022413360Z" level=info msg="StopPodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\""
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.100 [WARNING][6108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7", Pod:"coredns-7db6d8ff4d-nnp79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali560ce8ca877", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.100 [INFO][6108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.100 [INFO][6108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" iface="eth0" netns=""
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.100 [INFO][6108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.100 [INFO][6108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.143 [INFO][6115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0"
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.144 [INFO][6115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.144 [INFO][6115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.157 [WARNING][6115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0"
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.157 [INFO][6115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0"
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.160 [INFO][6115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:01:51.165284 containerd[1935]: 2025-05-10 00:01:51.162 [INFO][6108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.166883 containerd[1935]: time="2025-05-10T00:01:51.165287041Z" level=info msg="TearDown network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" successfully"
May 10 00:01:51.166883 containerd[1935]: time="2025-05-10T00:01:51.165325297Z" level=info msg="StopPodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" returns successfully"
May 10 00:01:51.167229 containerd[1935]: time="2025-05-10T00:01:51.167076121Z" level=info msg="RemovePodSandbox for \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\""
May 10 00:01:51.167314 containerd[1935]: time="2025-05-10T00:01:51.167268133Z" level=info msg="Forcibly stopping sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\""
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.236 [WARNING][6133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4bfdbcc0-fe8b-4871-a536-daaca6c5cc23", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 1, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-167", ContainerID:"aaaaf5592a1bed6127f5267b3f59287ab4e53e98b598beeb1aa254f2514bd7a7", Pod:"coredns-7db6d8ff4d-nnp79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali560ce8ca877", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.236 [INFO][6133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.236 [INFO][6133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" iface="eth0" netns=""
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.236 [INFO][6133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.236 [INFO][6133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.272 [INFO][6140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0"
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.273 [INFO][6140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.273 [INFO][6140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.288 [WARNING][6140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0"
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.288 [INFO][6140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" HandleID="k8s-pod-network.873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945" Workload="ip--172--31--18--167-k8s-coredns--7db6d8ff4d--nnp79-eth0"
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.290 [INFO][6140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 10 00:01:51.295705 containerd[1935]: 2025-05-10 00:01:51.293 [INFO][6133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945"
May 10 00:01:51.295705 containerd[1935]: time="2025-05-10T00:01:51.295671470Z" level=info msg="TearDown network for sandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" successfully"
May 10 00:01:51.302038 containerd[1935]: time="2025-05-10T00:01:51.301979762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:01:51.302204 containerd[1935]: time="2025-05-10T00:01:51.302078870Z" level=info msg="RemovePodSandbox \"873d40b48b534e0b1d038087be36a52cfcd1082a0fa44a553f0db4f625151945\" returns successfully"
May 10 00:01:51.987350 systemd[1]: Started sshd@15-172.31.18.167:22-147.75.109.163:57116.service - OpenSSH per-connection server daemon (147.75.109.163:57116).
May 10 00:01:52.167685 sshd[6150]: Accepted publickey for core from 147.75.109.163 port 57116 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:01:52.171293 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:52.180167 systemd-logind[1910]: New session 16 of user core.
May 10 00:01:52.187039 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 00:01:52.444231 sshd[6150]: pam_unix(sshd:session): session closed for user core
May 10 00:01:52.452428 systemd[1]: sshd@15-172.31.18.167:22-147.75.109.163:57116.service: Deactivated successfully.
May 10 00:01:52.457165 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:01:52.459764 systemd-logind[1910]: Session 16 logged out. Waiting for processes to exit.
May 10 00:01:52.463009 systemd-logind[1910]: Removed session 16.
May 10 00:01:57.493229 systemd[1]: Started sshd@16-172.31.18.167:22-147.75.109.163:35288.service - OpenSSH per-connection server daemon (147.75.109.163:35288).
May 10 00:01:57.677802 sshd[6188]: Accepted publickey for core from 147.75.109.163 port 35288 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:01:57.680570 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:57.689911 systemd-logind[1910]: New session 17 of user core.
May 10 00:01:57.697116 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:01:57.944504 sshd[6188]: pam_unix(sshd:session): session closed for user core
May 10 00:01:57.951394 systemd[1]: sshd@16-172.31.18.167:22-147.75.109.163:35288.service: Deactivated successfully.
May 10 00:01:57.955856 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:01:57.957364 systemd-logind[1910]: Session 17 logged out. Waiting for processes to exit.
May 10 00:01:57.959663 systemd-logind[1910]: Removed session 17.
May 10 00:02:02.986483 systemd[1]: Started sshd@17-172.31.18.167:22-147.75.109.163:35300.service - OpenSSH per-connection server daemon (147.75.109.163:35300).
May 10 00:02:03.160183 sshd[6202]: Accepted publickey for core from 147.75.109.163 port 35300 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:03.162906 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:03.170932 systemd-logind[1910]: New session 18 of user core.
May 10 00:02:03.183066 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:02:03.489328 sshd[6202]: pam_unix(sshd:session): session closed for user core
May 10 00:02:03.501916 systemd[1]: sshd@17-172.31.18.167:22-147.75.109.163:35300.service: Deactivated successfully.
May 10 00:02:03.512518 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:02:03.514883 systemd-logind[1910]: Session 18 logged out. Waiting for processes to exit.
May 10 00:02:03.541515 systemd[1]: Started sshd@18-172.31.18.167:22-147.75.109.163:35306.service - OpenSSH per-connection server daemon (147.75.109.163:35306).
May 10 00:02:03.544367 systemd-logind[1910]: Removed session 18.
May 10 00:02:03.722125 sshd[6216]: Accepted publickey for core from 147.75.109.163 port 35306 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:03.725075 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:03.733946 systemd-logind[1910]: New session 19 of user core.
May 10 00:02:03.739063 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:02:04.211140 sshd[6216]: pam_unix(sshd:session): session closed for user core
May 10 00:02:04.216991 systemd[1]: sshd@18-172.31.18.167:22-147.75.109.163:35306.service: Deactivated successfully.
May 10 00:02:04.217882 systemd-logind[1910]: Session 19 logged out. Waiting for processes to exit.
May 10 00:02:04.221073 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:02:04.228479 systemd-logind[1910]: Removed session 19.
May 10 00:02:04.249372 systemd[1]: Started sshd@19-172.31.18.167:22-147.75.109.163:35318.service - OpenSSH per-connection server daemon (147.75.109.163:35318).
May 10 00:02:04.439819 sshd[6230]: Accepted publickey for core from 147.75.109.163 port 35318 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:04.442828 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:04.452917 systemd-logind[1910]: New session 20 of user core.
May 10 00:02:04.458106 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 00:02:07.762724 sshd[6230]: pam_unix(sshd:session): session closed for user core
May 10 00:02:07.773409 systemd[1]: sshd@19-172.31.18.167:22-147.75.109.163:35318.service: Deactivated successfully.
May 10 00:02:07.784080 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:02:07.784539 systemd[1]: session-20.scope: Consumed 1.071s CPU time.
May 10 00:02:07.787833 systemd-logind[1910]: Session 20 logged out. Waiting for processes to exit.
May 10 00:02:07.804597 systemd[1]: Started sshd@20-172.31.18.167:22-147.75.109.163:39320.service - OpenSSH per-connection server daemon (147.75.109.163:39320).
May 10 00:02:07.809121 systemd-logind[1910]: Removed session 20.
May 10 00:02:07.999161 sshd[6277]: Accepted publickey for core from 147.75.109.163 port 39320 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:08.002833 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:08.012534 systemd-logind[1910]: New session 21 of user core.
May 10 00:02:08.018043 systemd[1]: Started session-21.scope - Session 21 of User core.
May 10 00:02:08.565849 sshd[6277]: pam_unix(sshd:session): session closed for user core
May 10 00:02:08.573664 systemd[1]: sshd@20-172.31.18.167:22-147.75.109.163:39320.service: Deactivated successfully.
May 10 00:02:08.579554 systemd[1]: session-21.scope: Deactivated successfully.
May 10 00:02:08.582309 systemd-logind[1910]: Session 21 logged out. Waiting for processes to exit.
May 10 00:02:08.584585 systemd-logind[1910]: Removed session 21.
May 10 00:02:08.606304 systemd[1]: Started sshd@21-172.31.18.167:22-147.75.109.163:39328.service - OpenSSH per-connection server daemon (147.75.109.163:39328).
May 10 00:02:08.780965 sshd[6288]: Accepted publickey for core from 147.75.109.163 port 39328 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:08.783963 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:08.793381 systemd-logind[1910]: New session 22 of user core.
May 10 00:02:08.802147 systemd[1]: Started session-22.scope - Session 22 of User core.
May 10 00:02:09.044611 sshd[6288]: pam_unix(sshd:session): session closed for user core
May 10 00:02:09.050936 systemd[1]: sshd@21-172.31.18.167:22-147.75.109.163:39328.service: Deactivated successfully.
May 10 00:02:09.055616 systemd[1]: session-22.scope: Deactivated successfully.
May 10 00:02:09.057985 systemd-logind[1910]: Session 22 logged out. Waiting for processes to exit.
May 10 00:02:09.060105 systemd-logind[1910]: Removed session 22.
May 10 00:02:14.088477 systemd[1]: Started sshd@22-172.31.18.167:22-147.75.109.163:39340.service - OpenSSH per-connection server daemon (147.75.109.163:39340).
May 10 00:02:14.258927 sshd[6300]: Accepted publickey for core from 147.75.109.163 port 39340 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:14.261853 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:14.270952 systemd-logind[1910]: New session 23 of user core.
May 10 00:02:14.277194 systemd[1]: Started session-23.scope - Session 23 of User core.
May 10 00:02:14.537252 sshd[6300]: pam_unix(sshd:session): session closed for user core
May 10 00:02:14.546333 systemd-logind[1910]: Session 23 logged out. Waiting for processes to exit.
May 10 00:02:14.548542 systemd[1]: sshd@22-172.31.18.167:22-147.75.109.163:39340.service: Deactivated successfully.
May 10 00:02:14.553870 systemd[1]: session-23.scope: Deactivated successfully.
May 10 00:02:14.557874 systemd-logind[1910]: Removed session 23.
May 10 00:02:15.266155 kubelet[3394]: I0510 00:02:15.266089 3394 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 00:02:19.576364 systemd[1]: Started sshd@23-172.31.18.167:22-147.75.109.163:58566.service - OpenSSH per-connection server daemon (147.75.109.163:58566).
May 10 00:02:19.752420 sshd[6324]: Accepted publickey for core from 147.75.109.163 port 58566 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:19.755219 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:19.768131 systemd-logind[1910]: New session 24 of user core.
May 10 00:02:19.775192 systemd[1]: Started session-24.scope - Session 24 of User core.
May 10 00:02:20.034455 sshd[6324]: pam_unix(sshd:session): session closed for user core
May 10 00:02:20.042826 systemd[1]: sshd@23-172.31.18.167:22-147.75.109.163:58566.service: Deactivated successfully.
May 10 00:02:20.047253 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:02:20.049834 systemd-logind[1910]: Session 24 logged out. Waiting for processes to exit.
May 10 00:02:20.054098 systemd-logind[1910]: Removed session 24.
May 10 00:02:25.075326 systemd[1]: Started sshd@24-172.31.18.167:22-147.75.109.163:58580.service - OpenSSH per-connection server daemon (147.75.109.163:58580).
May 10 00:02:25.268418 sshd[6358]: Accepted publickey for core from 147.75.109.163 port 58580 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:25.271328 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:25.283048 systemd-logind[1910]: New session 25 of user core.
May 10 00:02:25.291124 systemd[1]: Started session-25.scope - Session 25 of User core.
May 10 00:02:25.574369 sshd[6358]: pam_unix(sshd:session): session closed for user core
May 10 00:02:25.584870 systemd-logind[1910]: Session 25 logged out. Waiting for processes to exit.
May 10 00:02:25.586248 systemd[1]: sshd@24-172.31.18.167:22-147.75.109.163:58580.service: Deactivated successfully.
May 10 00:02:25.593413 systemd[1]: session-25.scope: Deactivated successfully.
May 10 00:02:25.596768 systemd-logind[1910]: Removed session 25.
May 10 00:02:30.616374 systemd[1]: Started sshd@25-172.31.18.167:22-147.75.109.163:51304.service - OpenSSH per-connection server daemon (147.75.109.163:51304).
May 10 00:02:30.806476 sshd[6371]: Accepted publickey for core from 147.75.109.163 port 51304 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:30.809489 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:30.823283 systemd-logind[1910]: New session 26 of user core.
May 10 00:02:30.828317 systemd[1]: Started session-26.scope - Session 26 of User core.
May 10 00:02:31.081875 sshd[6371]: pam_unix(sshd:session): session closed for user core
May 10 00:02:31.089088 systemd[1]: sshd@25-172.31.18.167:22-147.75.109.163:51304.service: Deactivated successfully.
May 10 00:02:31.095745 systemd[1]: session-26.scope: Deactivated successfully.
May 10 00:02:31.100364 systemd-logind[1910]: Session 26 logged out. Waiting for processes to exit.
May 10 00:02:31.103425 systemd-logind[1910]: Removed session 26.
May 10 00:02:36.127302 systemd[1]: Started sshd@26-172.31.18.167:22-147.75.109.163:51314.service - OpenSSH per-connection server daemon (147.75.109.163:51314).
May 10 00:02:36.301699 sshd[6407]: Accepted publickey for core from 147.75.109.163 port 51314 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:36.304704 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:36.312876 systemd-logind[1910]: New session 27 of user core.
May 10 00:02:36.323087 systemd[1]: Started session-27.scope - Session 27 of User core.
May 10 00:02:36.563377 sshd[6407]: pam_unix(sshd:session): session closed for user core
May 10 00:02:36.569101 systemd-logind[1910]: Session 27 logged out. Waiting for processes to exit.
May 10 00:02:36.569557 systemd[1]: sshd@26-172.31.18.167:22-147.75.109.163:51314.service: Deactivated successfully.
May 10 00:02:36.574276 systemd[1]: session-27.scope: Deactivated successfully.
May 10 00:02:36.578855 systemd-logind[1910]: Removed session 27.
May 10 00:02:41.602368 systemd[1]: Started sshd@27-172.31.18.167:22-147.75.109.163:49376.service - OpenSSH per-connection server daemon (147.75.109.163:49376).
May 10 00:02:41.776834 sshd[6419]: Accepted publickey for core from 147.75.109.163 port 49376 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:41.778948 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:41.786697 systemd-logind[1910]: New session 28 of user core.
May 10 00:02:41.793055 systemd[1]: Started session-28.scope - Session 28 of User core.
May 10 00:02:42.039139 sshd[6419]: pam_unix(sshd:session): session closed for user core
May 10 00:02:42.046675 systemd[1]: sshd@27-172.31.18.167:22-147.75.109.163:49376.service: Deactivated successfully.
May 10 00:02:42.052842 systemd[1]: session-28.scope: Deactivated successfully.
May 10 00:02:42.055023 systemd-logind[1910]: Session 28 logged out. Waiting for processes to exit.
May 10 00:02:42.056841 systemd-logind[1910]: Removed session 28.
May 10 00:02:55.643869 systemd[1]: cri-containerd-925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087.scope: Deactivated successfully.
May 10 00:02:55.644877 systemd[1]: cri-containerd-925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087.scope: Consumed 7.625s CPU time.
May 10 00:02:55.683580 containerd[1935]: time="2025-05-10T00:02:55.683460497Z" level=info msg="shim disconnected" id=925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087 namespace=k8s.io
May 10 00:02:55.683580 containerd[1935]: time="2025-05-10T00:02:55.683549849Z" level=warning msg="cleaning up after shim disconnected" id=925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087 namespace=k8s.io
May 10 00:02:55.683580 containerd[1935]: time="2025-05-10T00:02:55.683571353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:02:55.690067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087-rootfs.mount: Deactivated successfully.
May 10 00:02:56.070615 kubelet[3394]: I0510 00:02:56.070445 3394 scope.go:117] "RemoveContainer" containerID="925af831c9a363d12b6f1ac825119a01d14cc3c641faf3cfdaf350e26b23b087"
May 10 00:02:56.075041 containerd[1935]: time="2025-05-10T00:02:56.074955987Z" level=info msg="CreateContainer within sandbox \"8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 10 00:02:56.105194 containerd[1935]: time="2025-05-10T00:02:56.105090063Z" level=info msg="CreateContainer within sandbox \"8174d897e4058c7575a26787080bf584298eebee171f2166571f0bc4730f09ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b2723fddf642b0543ae09797d67f302fd60dd51a423a0c851f34af79f0b40d86\""
May 10 00:02:56.106184 containerd[1935]: time="2025-05-10T00:02:56.106115079Z" level=info msg="StartContainer for \"b2723fddf642b0543ae09797d67f302fd60dd51a423a0c851f34af79f0b40d86\""
May 10 00:02:56.167100 systemd[1]: Started cri-containerd-b2723fddf642b0543ae09797d67f302fd60dd51a423a0c851f34af79f0b40d86.scope - libcontainer container b2723fddf642b0543ae09797d67f302fd60dd51a423a0c851f34af79f0b40d86.
May 10 00:02:56.215721 containerd[1935]: time="2025-05-10T00:02:56.215655376Z" level=info msg="StartContainer for \"b2723fddf642b0543ae09797d67f302fd60dd51a423a0c851f34af79f0b40d86\" returns successfully"
May 10 00:02:57.199272 systemd[1]: cri-containerd-877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c.scope: Deactivated successfully.
May 10 00:02:57.199756 systemd[1]: cri-containerd-877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c.scope: Consumed 5.164s CPU time, 21.8M memory peak, 0B memory swap peak.
May 10 00:02:57.246400 containerd[1935]: time="2025-05-10T00:02:57.246230393Z" level=info msg="shim disconnected" id=877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c namespace=k8s.io
May 10 00:02:57.246400 containerd[1935]: time="2025-05-10T00:02:57.246311201Z" level=warning msg="cleaning up after shim disconnected" id=877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c namespace=k8s.io
May 10 00:02:57.246400 containerd[1935]: time="2025-05-10T00:02:57.246335561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:02:57.250859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c-rootfs.mount: Deactivated successfully.
May 10 00:02:58.081303 kubelet[3394]: I0510 00:02:58.081246 3394 scope.go:117] "RemoveContainer" containerID="877adf48c7bcc3fc458a367d0ef8d77a219cad06a1537e0f282d67a14535754c"
May 10 00:02:58.085634 containerd[1935]: time="2025-05-10T00:02:58.085565153Z" level=info msg="CreateContainer within sandbox \"dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:02:58.116889 containerd[1935]: time="2025-05-10T00:02:58.116761937Z" level=info msg="CreateContainer within sandbox \"dd42eda7390683fcdf69171452ca946f95895b953726191ba6be20200e51a585\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7ed81e4782895715997c6d819aed2bf3d4f5e13c15d0000b47ba2e880da74fc1\""
May 10 00:02:58.117558 containerd[1935]: time="2025-05-10T00:02:58.117507185Z" level=info msg="StartContainer for \"7ed81e4782895715997c6d819aed2bf3d4f5e13c15d0000b47ba2e880da74fc1\""
May 10 00:02:58.179107 systemd[1]: Started cri-containerd-7ed81e4782895715997c6d819aed2bf3d4f5e13c15d0000b47ba2e880da74fc1.scope - libcontainer container 7ed81e4782895715997c6d819aed2bf3d4f5e13c15d0000b47ba2e880da74fc1.
May 10 00:02:58.252935 containerd[1935]: time="2025-05-10T00:02:58.252733422Z" level=info msg="StartContainer for \"7ed81e4782895715997c6d819aed2bf3d4f5e13c15d0000b47ba2e880da74fc1\" returns successfully"
May 10 00:03:01.130966 systemd[1]: cri-containerd-b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03.scope: Deactivated successfully.
May 10 00:03:01.134006 systemd[1]: cri-containerd-b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03.scope: Consumed 5.859s CPU time, 15.7M memory peak, 0B memory swap peak.
May 10 00:03:01.175244 containerd[1935]: time="2025-05-10T00:03:01.175039797Z" level=info msg="shim disconnected" id=b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03 namespace=k8s.io
May 10 00:03:01.176897 containerd[1935]: time="2025-05-10T00:03:01.176831961Z" level=warning msg="cleaning up after shim disconnected" id=b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03 namespace=k8s.io
May 10 00:03:01.177006 containerd[1935]: time="2025-05-10T00:03:01.176909457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:03:01.190525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03-rootfs.mount: Deactivated successfully.
May 10 00:03:01.911217 kubelet[3394]: E0510 00:03:01.911148 3394 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
May 10 00:03:01.912061 kubelet[3394]: E0510 00:03:01.911246 3394 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
May 10 00:03:02.099719 kubelet[3394]: I0510 00:03:02.099077 3394 scope.go:117] "RemoveContainer" containerID="b565eadfb320e2c40583d273100b060e5097d0d1597a191fead321b77fe81e03"
May 10 00:03:02.104480 containerd[1935]: time="2025-05-10T00:03:02.103966641Z" level=info msg="CreateContainer within sandbox \"1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:03:02.130482 containerd[1935]: time="2025-05-10T00:03:02.130410069Z" level=info msg="CreateContainer within sandbox \"1aa19a391e5798b4917669fc12d5c08af37dd50dac4e3659bbd25506a8e6ed19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"849e4f78adb558ef448320448666ef868580cb39e8e734b3e46fe91de124d972\""
May 10 00:03:02.131758 containerd[1935]: time="2025-05-10T00:03:02.131617377Z" level=info msg="StartContainer for \"849e4f78adb558ef448320448666ef868580cb39e8e734b3e46fe91de124d972\""
May 10 00:03:02.194982 systemd[1]: run-containerd-runc-k8s.io-849e4f78adb558ef448320448666ef868580cb39e8e734b3e46fe91de124d972-runc.QGSu9o.mount: Deactivated successfully.
May 10 00:03:02.207094 systemd[1]: Started cri-containerd-849e4f78adb558ef448320448666ef868580cb39e8e734b3e46fe91de124d972.scope - libcontainer container 849e4f78adb558ef448320448666ef868580cb39e8e734b3e46fe91de124d972.
May 10 00:03:02.272350 containerd[1935]: time="2025-05-10T00:03:02.272237446Z" level=info msg="StartContainer for \"849e4f78adb558ef448320448666ef868580cb39e8e734b3e46fe91de124d972\" returns successfully"
May 10 00:03:11.911993 kubelet[3394]: E0510 00:03:11.911535 3394 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-167?timeout=10s\": context deadline exceeded"