Feb 13 16:04:28.216676 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 16:04:28.216721 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025
Feb 13 16:04:28.216745 kernel: KASLR disabled due to lack of seed
Feb 13 16:04:28.216762 kernel: efi: EFI v2.7 by EDK II
Feb 13 16:04:28.216777 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 16:04:28.216793 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:04:28.216811 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 16:04:28.216826 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 16:04:28.216842 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 16:04:28.216858 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 16:04:28.216878 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 16:04:28.216894 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 16:04:28.216909 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 16:04:28.216925 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 16:04:28.216944 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 16:04:28.216965 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 16:04:28.216982 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 16:04:28.216998 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 16:04:28.217014 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 16:04:28.217030 kernel: printk: bootconsole [uart0] enabled
Feb 13 16:04:28.217046 kernel: NUMA: Failed to initialise from firmware
Feb 13 16:04:28.217063 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:04:28.217079 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 16:04:28.217095 kernel: Zone ranges:
Feb 13 16:04:28.217111 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 16:04:28.217127 kernel: DMA32 empty
Feb 13 16:04:28.217148 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 16:04:28.217214 kernel: Movable zone start for each node
Feb 13 16:04:28.217234 kernel: Early memory node ranges
Feb 13 16:04:28.217252 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 16:04:28.217268 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 16:04:28.217285 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 16:04:28.217302 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 16:04:28.217318 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 16:04:28.217335 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 16:04:28.217351 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 16:04:28.217367 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 16:04:28.217383 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:04:28.217406 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 16:04:28.217424 kernel: psci: probing for conduit method from ACPI.
Feb 13 16:04:28.217448 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 16:04:28.217465 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 16:04:28.217483 kernel: psci: Trusted OS migration not required
Feb 13 16:04:28.217505 kernel: psci: SMC Calling Convention v1.1
Feb 13 16:04:28.217523 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 16:04:28.217540 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 16:04:28.217558 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 16:04:28.217575 kernel: Detected PIPT I-cache on CPU0
Feb 13 16:04:28.217592 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 16:04:28.217610 kernel: CPU features: detected: Spectre-v2
Feb 13 16:04:28.217627 kernel: CPU features: detected: Spectre-v3a
Feb 13 16:04:28.217646 kernel: CPU features: detected: Spectre-BHB
Feb 13 16:04:28.217663 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 16:04:28.217681 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 16:04:28.217703 kernel: alternatives: applying boot alternatives
Feb 13 16:04:28.217723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:04:28.217743 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:04:28.217761 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 16:04:28.217778 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:04:28.217796 kernel: Fallback order for Node 0: 0
Feb 13 16:04:28.217813 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 16:04:28.217831 kernel: Policy zone: Normal
Feb 13 16:04:28.217848 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:04:28.217865 kernel: software IO TLB: area num 2.
Feb 13 16:04:28.217882 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 16:04:28.217906 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 16:04:28.217923 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:04:28.217941 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:04:28.217959 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:04:28.217977 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:04:28.217995 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:04:28.218012 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:04:28.218030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:04:28.218048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:04:28.218065 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 16:04:28.218082 kernel: GICv3: 96 SPIs implemented
Feb 13 16:04:28.218104 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 16:04:28.218121 kernel: Root IRQ handler: gic_handle_irq
Feb 13 16:04:28.218139 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 16:04:28.218157 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 16:04:28.219289 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 16:04:28.219309 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 16:04:28.219328 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 16:04:28.219345 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 16:04:28.219363 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 16:04:28.219380 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 16:04:28.219398 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:04:28.219416 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 16:04:28.219441 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 16:04:28.219459 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 16:04:28.219477 kernel: Console: colour dummy device 80x25
Feb 13 16:04:28.219495 kernel: printk: console [tty1] enabled
Feb 13 16:04:28.219513 kernel: ACPI: Core revision 20230628
Feb 13 16:04:28.219531 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 16:04:28.219549 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:04:28.219567 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:04:28.219585 kernel: landlock: Up and running.
Feb 13 16:04:28.219607 kernel: SELinux: Initializing.
Feb 13 16:04:28.219626 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:04:28.219643 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:04:28.219661 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:04:28.219679 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:04:28.219697 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:04:28.219715 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 16:04:28.219733 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 16:04:28.219751 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 16:04:28.219774 kernel: Remapping and enabling EFI services.
Feb 13 16:04:28.219792 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:04:28.219809 kernel: Detected PIPT I-cache on CPU1
Feb 13 16:04:28.219827 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 16:04:28.219845 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 16:04:28.219862 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 16:04:28.219880 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:04:28.219897 kernel: SMP: Total of 2 processors activated.
Feb 13 16:04:28.219915 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 16:04:28.219960 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 16:04:28.219979 kernel: CPU features: detected: CRC32 instructions
Feb 13 16:04:28.219997 kernel: CPU: All CPU(s) started at EL1
Feb 13 16:04:28.220028 kernel: alternatives: applying system-wide alternatives
Feb 13 16:04:28.220053 kernel: devtmpfs: initialized
Feb 13 16:04:28.220071 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:04:28.220090 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:04:28.220108 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:04:28.220127 kernel: SMBIOS 3.0.0 present.
Feb 13 16:04:28.220146 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 16:04:28.220195 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:04:28.220216 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 16:04:28.220235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 16:04:28.220254 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 16:04:28.220273 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:04:28.220291 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Feb 13 16:04:28.220309 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:04:28.220335 kernel: cpuidle: using governor menu
Feb 13 16:04:28.220354 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 16:04:28.220372 kernel: ASID allocator initialised with 65536 entries
Feb 13 16:04:28.220390 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:04:28.220408 kernel: Serial: AMBA PL011 UART driver
Feb 13 16:04:28.220426 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 16:04:28.220445 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 16:04:28.220464 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 16:04:28.220482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 16:04:28.220506 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 16:04:28.220524 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 16:04:28.220542 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:04:28.220560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:04:28.220579 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 16:04:28.220597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 16:04:28.220615 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:04:28.220633 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:04:28.220652 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:04:28.220675 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:04:28.220694 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:04:28.220712 kernel: ACPI: Interpreter enabled
Feb 13 16:04:28.220730 kernel: ACPI: Using GIC for interrupt routing
Feb 13 16:04:28.220748 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 16:04:28.220766 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 16:04:28.221101 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 16:04:28.221521 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 16:04:28.221738 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 16:04:28.221946 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 16:04:28.222149 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 16:04:28.222242 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 16:04:28.222265 kernel: acpiphp: Slot [1] registered
Feb 13 16:04:28.222284 kernel: acpiphp: Slot [2] registered
Feb 13 16:04:28.222302 kernel: acpiphp: Slot [3] registered
Feb 13 16:04:28.222321 kernel: acpiphp: Slot [4] registered
Feb 13 16:04:28.222345 kernel: acpiphp: Slot [5] registered
Feb 13 16:04:28.222364 kernel: acpiphp: Slot [6] registered
Feb 13 16:04:28.222382 kernel: acpiphp: Slot [7] registered
Feb 13 16:04:28.222401 kernel: acpiphp: Slot [8] registered
Feb 13 16:04:28.222419 kernel: acpiphp: Slot [9] registered
Feb 13 16:04:28.222437 kernel: acpiphp: Slot [10] registered
Feb 13 16:04:28.222455 kernel: acpiphp: Slot [11] registered
Feb 13 16:04:28.222473 kernel: acpiphp: Slot [12] registered
Feb 13 16:04:28.222492 kernel: acpiphp: Slot [13] registered
Feb 13 16:04:28.222510 kernel: acpiphp: Slot [14] registered
Feb 13 16:04:28.222533 kernel: acpiphp: Slot [15] registered
Feb 13 16:04:28.222552 kernel: acpiphp: Slot [16] registered
Feb 13 16:04:28.222570 kernel: acpiphp: Slot [17] registered
Feb 13 16:04:28.222588 kernel: acpiphp: Slot [18] registered
Feb 13 16:04:28.222606 kernel: acpiphp: Slot [19] registered
Feb 13 16:04:28.222624 kernel: acpiphp: Slot [20] registered
Feb 13 16:04:28.222642 kernel: acpiphp: Slot [21] registered
Feb 13 16:04:28.222661 kernel: acpiphp: Slot [22] registered
Feb 13 16:04:28.222679 kernel: acpiphp: Slot [23] registered
Feb 13 16:04:28.222701 kernel: acpiphp: Slot [24] registered
Feb 13 16:04:28.222720 kernel: acpiphp: Slot [25] registered
Feb 13 16:04:28.222738 kernel: acpiphp: Slot [26] registered
Feb 13 16:04:28.222756 kernel: acpiphp: Slot [27] registered
Feb 13 16:04:28.222775 kernel: acpiphp: Slot [28] registered
Feb 13 16:04:28.222793 kernel: acpiphp: Slot [29] registered
Feb 13 16:04:28.222811 kernel: acpiphp: Slot [30] registered
Feb 13 16:04:28.222829 kernel: acpiphp: Slot [31] registered
Feb 13 16:04:28.222847 kernel: PCI host bridge to bus 0000:00
Feb 13 16:04:28.223057 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 16:04:28.223277 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 16:04:28.223470 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:04:28.223658 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 16:04:28.223891 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 16:04:28.224148 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 16:04:28.224389 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 16:04:28.224619 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 16:04:28.224828 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 16:04:28.225035 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:04:28.225287 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 16:04:28.225500 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 16:04:28.225710 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 16:04:28.225922 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 16:04:28.226130 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:04:28.228547 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 16:04:28.228798 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 16:04:28.229009 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 16:04:28.230324 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 16:04:28.232563 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 16:04:28.232771 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 16:04:28.232973 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 16:04:28.233195 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:04:28.233225 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 16:04:28.233245 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 16:04:28.233264 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 16:04:28.233284 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 16:04:28.233303 kernel: iommu: Default domain type: Translated
Feb 13 16:04:28.233322 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 16:04:28.233351 kernel: efivars: Registered efivars operations
Feb 13 16:04:28.233372 kernel: vgaarb: loaded
Feb 13 16:04:28.233391 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 16:04:28.233409 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:04:28.233428 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:04:28.233447 kernel: pnp: PnP ACPI init
Feb 13 16:04:28.233697 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 16:04:28.233727 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 16:04:28.233753 kernel: NET: Registered PF_INET protocol family
Feb 13 16:04:28.233772 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 16:04:28.233792 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 16:04:28.233811 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:04:28.233830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:04:28.233849 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 16:04:28.233867 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 16:04:28.233886 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:04:28.233905 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:04:28.233928 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:04:28.233947 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:04:28.233965 kernel: kvm [1]: HYP mode not available
Feb 13 16:04:28.233984 kernel: Initialise system trusted keyrings
Feb 13 16:04:28.234004 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 16:04:28.234024 kernel: Key type asymmetric registered
Feb 13 16:04:28.234042 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:04:28.234062 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 16:04:28.234082 kernel: io scheduler mq-deadline registered
Feb 13 16:04:28.234107 kernel: io scheduler kyber registered
Feb 13 16:04:28.234126 kernel: io scheduler bfq registered
Feb 13 16:04:28.235634 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 16:04:28.235695 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 16:04:28.235715 kernel: ACPI: button: Power Button [PWRB]
Feb 13 16:04:28.235734 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 16:04:28.235753 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 16:04:28.235772 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:04:28.235806 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 16:04:28.236069 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 16:04:28.236101 kernel: printk: console [ttyS0] disabled
Feb 13 16:04:28.236122 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 16:04:28.236141 kernel: printk: console [ttyS0] enabled
Feb 13 16:04:28.236203 kernel: printk: bootconsole [uart0] disabled
Feb 13 16:04:28.236229 kernel: thunder_xcv, ver 1.0
Feb 13 16:04:28.236249 kernel: thunder_bgx, ver 1.0
Feb 13 16:04:28.236267 kernel: nicpf, ver 1.0
Feb 13 16:04:28.236294 kernel: nicvf, ver 1.0
Feb 13 16:04:28.236519 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 16:04:28.236712 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:04:27 UTC (1739462667)
Feb 13 16:04:28.236738 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 16:04:28.236757 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 16:04:28.236776 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 16:04:28.236794 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 16:04:28.236813 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:04:28.236837 kernel: Segment Routing with IPv6
Feb 13 16:04:28.236856 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:04:28.236874 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:04:28.236892 kernel: Key type dns_resolver registered
Feb 13 16:04:28.236911 kernel: registered taskstats version 1
Feb 13 16:04:28.236929 kernel: Loading compiled-in X.509 certificates
Feb 13 16:04:28.236948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5'
Feb 13 16:04:28.236966 kernel: Key type .fscrypt registered
Feb 13 16:04:28.236984 kernel: Key type fscrypt-provisioning registered
Feb 13 16:04:28.237008 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:04:28.237027 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:04:28.237046 kernel: ima: No architecture policies found
Feb 13 16:04:28.237065 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 16:04:28.237083 kernel: clk: Disabling unused clocks
Feb 13 16:04:28.237102 kernel: Freeing unused kernel memory: 39360K
Feb 13 16:04:28.237120 kernel: Run /init as init process
Feb 13 16:04:28.237138 kernel: with arguments:
Feb 13 16:04:28.237156 kernel: /init
Feb 13 16:04:28.237219 kernel: with environment:
Feb 13 16:04:28.237245 kernel: HOME=/
Feb 13 16:04:28.237264 kernel: TERM=linux
Feb 13 16:04:28.237282 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:04:28.237304 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:04:28.237328 systemd[1]: Detected virtualization amazon.
Feb 13 16:04:28.237348 systemd[1]: Detected architecture arm64.
Feb 13 16:04:28.237368 systemd[1]: Running in initrd.
Feb 13 16:04:28.237392 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:04:28.237412 systemd[1]: Hostname set to .
Feb 13 16:04:28.237433 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:04:28.237453 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:04:28.237472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:04:28.237493 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:04:28.237514 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:04:28.237535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:04:28.237560 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:04:28.237581 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:04:28.237605 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:04:28.237627 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:04:28.237648 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:04:28.237668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:04:28.237690 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:04:28.237719 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:04:28.237739 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:04:28.237759 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:04:28.237780 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:04:28.237800 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:04:28.237821 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:04:28.237841 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:04:28.237862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:04:28.237882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:04:28.237907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:04:28.237927 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:04:28.237947 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:04:28.237968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:04:28.237989 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:04:28.238009 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:04:28.238030 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:04:28.238050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:04:28.238075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:04:28.238096 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:04:28.238117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:04:28.238138 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:04:28.238259 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 16:04:28.238316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:04:28.238338 systemd-journald[251]: Journal started
Feb 13 16:04:28.238382 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2c977cc69cda92753d466eb078b753) is 8.0M, max 75.3M, 67.3M free.
Feb 13 16:04:28.198811 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 16:04:28.244540 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:04:28.249195 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:04:28.252185 kernel: Bridge firewalling registered
Feb 13 16:04:28.252277 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 16:04:28.255503 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:04:28.268849 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:04:28.275224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:04:28.286224 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:04:28.301592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:04:28.314561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:04:28.323511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:04:28.333101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:04:28.354143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:04:28.367437 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:04:28.373671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:04:28.382685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:04:28.398465 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:04:28.418895 dracut-cmdline[282]: dracut-dracut-053
Feb 13 16:04:28.426189 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:04:28.487668 systemd-resolved[287]: Positive Trust Anchors:
Feb 13 16:04:28.487707 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:04:28.487773 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:04:28.621229 kernel: SCSI subsystem initialized
Feb 13 16:04:28.628247 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 16:04:28.641300 kernel: iscsi: registered transport (tcp)
Feb 13 16:04:28.663282 kernel: iscsi: registered transport (qla4xxx)
Feb 13 16:04:28.663356 kernel: QLogic iSCSI HBA Driver
Feb 13 16:04:28.745132 kernel: random: crng init done
Feb 13 16:04:28.743375 systemd-resolved[287]: Defaulting to hostname 'linux'.
Feb 13 16:04:28.747250 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:04:28.764158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:04:28.774239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:04:28.785518 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 16:04:28.827645 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 16:04:28.827724 kernel: device-mapper: uevent: version 1.0.3
Feb 13 16:04:28.827752 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 16:04:28.896222 kernel: raid6: neonx8 gen() 6553 MB/s
Feb 13 16:04:28.913198 kernel: raid6: neonx4 gen() 6479 MB/s
Feb 13 16:04:28.930202 kernel: raid6: neonx2 gen() 5266 MB/s
Feb 13 16:04:28.947199 kernel: raid6: neonx1 gen() 3883 MB/s
Feb 13 16:04:28.964201 kernel: raid6: int64x8 gen() 3804 MB/s
Feb 13 16:04:28.981198 kernel: raid6: int64x4 gen() 3687 MB/s
Feb 13 16:04:28.998197 kernel: raid6: int64x2 gen() 3555 MB/s
Feb 13 16:04:29.016251 kernel: raid6: int64x1 gen() 2770 MB/s
Feb 13 16:04:29.016289 kernel: raid6: using algorithm neonx8 gen() 6553 MB/s
Feb 13 16:04:29.034980 kernel: raid6: .... xor() 4916 MB/s, rmw enabled
Feb 13 16:04:29.035020 kernel: raid6: using neon recovery algorithm
Feb 13 16:04:29.043702 kernel: xor: measuring software checksum speed
Feb 13 16:04:29.043755 kernel: 8regs : 10974 MB/sec
Feb 13 16:04:29.044950 kernel: 32regs : 11489 MB/sec
Feb 13 16:04:29.047227 kernel: arm64_neon : 9005 MB/sec
Feb 13 16:04:29.047263 kernel: xor: using function: 32regs (11489 MB/sec)
Feb 13 16:04:29.130225 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 16:04:29.151493 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:04:29.161650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:04:29.203638 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Feb 13 16:04:29.212196 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:04:29.220465 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 16:04:29.268750 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 16:04:29.326696 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:04:29.336467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:04:29.458276 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:04:29.471553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 16:04:29.519737 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:04:29.523556 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:04:29.534219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:04:29.538671 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:04:29.550672 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 16:04:29.600120 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:04:29.648640 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 16:04:29.648710 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 16:04:29.692979 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 16:04:29.693288 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 16:04:29.693566 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:02:5b:b2:2b:51
Feb 13 16:04:29.665221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:04:29.665495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:04:29.668322 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:04:29.670444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:04:29.670735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:04:29.673390 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:04:29.680662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:04:29.707026 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:04:29.728376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:04:29.738224 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 16:04:29.738300 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 16:04:29.744484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:04:29.755222 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 16:04:29.764037 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 16:04:29.764109 kernel: GPT:9289727 != 16777215
Feb 13 16:04:29.764144 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 16:04:29.764191 kernel: GPT:9289727 != 16777215
Feb 13 16:04:29.765101 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 16:04:29.766058 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:29.783885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:04:29.838586 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (519)
Feb 13 16:04:29.870193 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (546)
Feb 13 16:04:29.959855 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 16:04:29.990554 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 16:04:30.008425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 16:04:30.022408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 16:04:30.027541 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 16:04:30.039577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 16:04:30.056454 disk-uuid[663]: Primary Header is updated.
Feb 13 16:04:30.056454 disk-uuid[663]: Secondary Entries is updated.
Feb 13 16:04:30.056454 disk-uuid[663]: Secondary Header is updated.
Feb 13 16:04:30.068223 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:30.077207 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:30.085208 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:31.090213 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:31.091584 disk-uuid[664]: The operation has completed successfully.
Feb 13 16:04:31.279999 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 16:04:31.281095 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 16:04:31.319456 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 16:04:31.330339 sh[1008]: Success
Feb 13 16:04:31.356201 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 16:04:31.468677 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 16:04:31.474392 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 16:04:31.482358 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 16:04:31.516182 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19
Feb 13 16:04:31.516246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:31.516284 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 16:04:31.516828 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 16:04:31.518065 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 16:04:31.664210 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 16:04:31.689344 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 16:04:31.693228 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 16:04:31.703454 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 16:04:31.710441 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 16:04:31.734823 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:31.734916 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:31.734953 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:04:31.743647 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:04:31.762384 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 16:04:31.765578 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:31.778226 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 16:04:31.790610 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 16:04:31.896424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:04:31.910530 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:04:31.973374 systemd-networkd[1200]: lo: Link UP
Feb 13 16:04:31.973392 systemd-networkd[1200]: lo: Gained carrier
Feb 13 16:04:31.976539 systemd-networkd[1200]: Enumeration completed
Feb 13 16:04:31.977256 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:04:31.977305 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:04:31.977311 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:04:31.984324 systemd[1]: Reached target network.target - Network.
Feb 13 16:04:31.985961 systemd-networkd[1200]: eth0: Link UP
Feb 13 16:04:31.985968 systemd-networkd[1200]: eth0: Gained carrier
Feb 13 16:04:31.985986 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:04:32.029255 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.31.154/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 16:04:32.175532 ignition[1107]: Ignition 2.19.0
Feb 13 16:04:32.175562 ignition[1107]: Stage: fetch-offline
Feb 13 16:04:32.177220 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:32.177260 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:32.177854 ignition[1107]: Ignition finished successfully
Feb 13 16:04:32.186773 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:04:32.204586 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 16:04:32.229114 ignition[1212]: Ignition 2.19.0
Feb 13 16:04:32.229136 ignition[1212]: Stage: fetch
Feb 13 16:04:32.229801 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:32.229910 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:32.230792 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:32.242127 ignition[1212]: PUT result: OK
Feb 13 16:04:32.245010 ignition[1212]: parsed url from cmdline: ""
Feb 13 16:04:32.245032 ignition[1212]: no config URL provided
Feb 13 16:04:32.245049 ignition[1212]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:04:32.245075 ignition[1212]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:04:32.245109 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:32.252523 ignition[1212]: PUT result: OK
Feb 13 16:04:32.252620 ignition[1212]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 16:04:32.254131 ignition[1212]: GET result: OK
Feb 13 16:04:32.257592 ignition[1212]: parsing config with SHA512: 41541ab69fb0f74a4eede156fffad017415be4ab766a9c8c4475db2857d29322c02530447e552fc7062fabd64df324416b991756801b77962594bb3298f51fac
Feb 13 16:04:32.266271 unknown[1212]: fetched base config from "system"
Feb 13 16:04:32.266312 unknown[1212]: fetched base config from "system"
Feb 13 16:04:32.266329 unknown[1212]: fetched user config from "aws"
Feb 13 16:04:32.274537 ignition[1212]: fetch: fetch complete
Feb 13 16:04:32.276238 ignition[1212]: fetch: fetch passed
Feb 13 16:04:32.277688 ignition[1212]: Ignition finished successfully
Feb 13 16:04:32.282696 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 16:04:32.294552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 16:04:32.326117 ignition[1218]: Ignition 2.19.0
Feb 13 16:04:32.326149 ignition[1218]: Stage: kargs
Feb 13 16:04:32.327597 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:32.327623 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:32.327784 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:32.331011 ignition[1218]: PUT result: OK
Feb 13 16:04:32.338934 ignition[1218]: kargs: kargs passed
Feb 13 16:04:32.339093 ignition[1218]: Ignition finished successfully
Feb 13 16:04:32.343836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 16:04:32.355582 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 16:04:32.380521 ignition[1224]: Ignition 2.19.0
Feb 13 16:04:32.380554 ignition[1224]: Stage: disks
Feb 13 16:04:32.381857 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:32.381885 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:32.382052 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:32.383678 ignition[1224]: PUT result: OK
Feb 13 16:04:32.393093 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 16:04:32.389691 ignition[1224]: disks: disks passed
Feb 13 16:04:32.396833 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 16:04:32.389883 ignition[1224]: Ignition finished successfully
Feb 13 16:04:32.399707 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:04:32.403807 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:04:32.405742 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:04:32.407646 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:04:32.431090 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 16:04:32.472968 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 16:04:32.480369 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 16:04:32.491363 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 16:04:32.588206 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none.
Feb 13 16:04:32.589006 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 16:04:32.593098 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:04:32.611362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:04:32.618333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 16:04:32.622428 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 16:04:32.623385 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 16:04:32.623432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:04:32.644192 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251)
Feb 13 16:04:32.649201 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:32.649260 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:32.649287 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:04:32.657237 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:04:32.660211 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:04:32.660699 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 16:04:32.680538 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 16:04:33.128923 initrd-setup-root[1276]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 16:04:33.147592 initrd-setup-root[1283]: cut: /sysroot/etc/group: No such file or directory
Feb 13 16:04:33.156991 initrd-setup-root[1290]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 16:04:33.166106 initrd-setup-root[1297]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 16:04:33.455878 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 16:04:33.467343 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 16:04:33.473436 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 16:04:33.489927 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 16:04:33.494244 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:33.534800 ignition[1365]: INFO : Ignition 2.19.0
Feb 13 16:04:33.534800 ignition[1365]: INFO : Stage: mount
Feb 13 16:04:33.538867 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:33.538867 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:33.538867 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:33.538867 ignition[1365]: INFO : PUT result: OK
Feb 13 16:04:33.546051 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:04:33.552055 ignition[1365]: INFO : mount: mount passed
Feb 13 16:04:33.552055 ignition[1365]: INFO : Ignition finished successfully
Feb 13 16:04:33.557460 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:04:33.568494 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:04:33.599358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:04:33.620222 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1376)
Feb 13 16:04:33.620351 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:33.623640 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:33.624839 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:04:33.631237 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:04:33.633489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:04:33.674057 ignition[1393]: INFO : Ignition 2.19.0
Feb 13 16:04:33.677037 ignition[1393]: INFO : Stage: files
Feb 13 16:04:33.677037 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:33.677037 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:33.677037 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:33.685041 ignition[1393]: INFO : PUT result: OK
Feb 13 16:04:33.688903 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 16:04:33.692979 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 16:04:33.692979 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 16:04:33.713990 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 16:04:33.716635 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 16:04:33.719647 unknown[1393]: wrote ssh authorized keys file for user: core
Feb 13 16:04:33.721828 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 16:04:33.724570 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:04:33.728072 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 16:04:33.771572 systemd-networkd[1200]: eth0: Gained IPv6LL
Feb 13 16:04:33.843698 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 16:04:34.019774 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:04:34.023364 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 16:04:34.026901 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 16:04:34.030227 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:04:34.043992 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:04:34.043992 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:04:34.050428 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:04:34.050428 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:04:34.050428 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:04:34.060825 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:04:34.060825 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:04:34.060825 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 16:04:34.060825 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 16:04:34.060825 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 16:04:34.060825 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 16:04:34.524497 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 16:04:34.898374 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 16:04:34.898374 ignition[1393]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:04:34.908259 ignition[1393]: INFO : files: files passed
Feb 13 16:04:34.908259 ignition[1393]: INFO : Ignition finished successfully
Feb 13 16:04:34.906282 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 16:04:34.914565 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 16:04:34.953597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 16:04:34.973943 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 16:04:34.974143 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 16:04:34.995394 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:04:34.990890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:04:35.000708 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:04:35.000708 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:04:34.994087 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 16:04:35.017607 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 16:04:35.081133 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:04:35.081541 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:04:35.086331 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:04:35.088462 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:04:35.092820 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:04:35.108343 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:04:35.135355 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:04:35.144483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:04:35.175511 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:35.179943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:35.184398 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:04:35.196710 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:04:35.196979 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:04:35.201644 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:04:35.205859 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:04:35.208068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:04:35.215111 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:04:35.217944 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:04:35.223677 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:04:35.225758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:04:35.228203 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:04:35.231000 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:04:35.234604 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:04:35.242894 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:04:35.243382 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:04:35.250418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:35.252992 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:35.258924 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:04:35.263944 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:35.266551 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:04:35.266782 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:04:35.274094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:04:35.274441 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:04:35.283044 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:04:35.283542 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:04:35.300422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:04:35.307364 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Feb 13 16:04:35.310500 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:04:35.310889 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:35.314769 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:04:35.315008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:04:35.351633 ignition[1445]: INFO : Ignition 2.19.0 Feb 13 16:04:35.351633 ignition[1445]: INFO : Stage: umount Feb 13 16:04:35.351633 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:35.351633 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:35.351633 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:35.364077 ignition[1445]: INFO : PUT result: OK Feb 13 16:04:35.359519 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:04:35.359762 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:04:35.375365 ignition[1445]: INFO : umount: umount passed Feb 13 16:04:35.377554 ignition[1445]: INFO : Ignition finished successfully Feb 13 16:04:35.380955 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:04:35.381622 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:04:35.389715 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:04:35.391659 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:04:35.395380 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:04:35.395477 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:04:35.397568 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:04:35.397653 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:04:35.397793 systemd[1]: Stopped target network.target - Network. Feb 13 16:04:35.398501 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:04:35.398584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:04:35.400068 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:04:35.430036 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:04:35.433346 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:35.436534 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:04:35.438558 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:04:35.440448 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:04:35.440535 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:04:35.442415 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:04:35.442491 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:04:35.444375 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:04:35.444467 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:04:35.446320 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:04:35.446397 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:04:35.448564 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:04:35.450468 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
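Each Ignition stage above opens with an IMDSv2 handshake, which is what the "PUT http://169.254.169.254/latest/api/token: attempt #1" / "PUT result: OK" pairs record: a PUT obtains a short-lived session token, and later metadata requests present it in a header. A minimal sketch of that exchange (the TTL value and the example metadata path are illustrative choices, not taken from the log):

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain a session token via PUT (IMDSv2).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: present the token on ordinary metadata GETs.
req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=2).read().decode())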
Feb 13 16:04:35.455146 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:04:35.458282 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:04:35.458290 systemd-networkd[1200]: eth0: DHCPv6 lease lost Feb 13 16:04:35.460895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:04:35.468931 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:04:35.469623 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:04:35.472968 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:04:35.473255 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:04:35.482538 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:04:35.483003 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:04:35.492036 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:04:35.492145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:04:35.506923 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:04:35.515064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:04:35.516157 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:04:35.525963 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:04:35.526067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:35.528807 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:04:35.528910 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:35.554539 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:04:35.554632 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:35.558754 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:35.582433 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:04:35.587626 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:35.592793 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:04:35.592879 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:35.597833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:04:35.597938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:04:35.600104 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:04:35.600273 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:04:35.612546 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:04:35.612649 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:04:35.614922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:04:35.615002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:04:35.632511 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:04:35.636520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:04:35.636636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 16:04:35.640071 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:04:35.640152 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:35.642830 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:04:35.642907 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:35.645421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:04:35.645504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:35.648634 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:04:35.648811 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:04:35.686879 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:04:35.687104 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:04:35.692566 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:04:35.702446 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:04:35.732431 systemd[1]: Switching root. Feb 13 16:04:35.786568 systemd-journald[251]: Journal stopped Feb 13 16:04:38.283102 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 16:04:38.283266 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:04:38.283313 kernel: SELinux: policy capability open_perms=1 Feb 13 16:04:38.283344 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:04:38.283375 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:04:38.283407 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:04:38.283447 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:04:38.283478 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:04:38.283508 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:04:38.283543 kernel: audit: type=1403 audit(1739462676.333:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:04:38.283585 systemd[1]: Successfully loaded SELinux policy in 60.292ms. Feb 13 16:04:38.283632 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.956ms. Feb 13 16:04:38.283667 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:04:38.283698 systemd[1]: Detected virtualization amazon. Feb 13 16:04:38.283739 systemd[1]: Detected architecture arm64. Feb 13 16:04:38.283773 systemd[1]: Detected first boot. Feb 13 16:04:38.283805 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:04:38.283860 zram_generator::config[1487]: No configuration found. Feb 13 16:04:38.283897 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:04:38.283930 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 16:04:38.283962 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 16:04:38.283994 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 16:04:38.284027 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
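The systemd banner above encodes compile-time options as a single +/- flag string. Splitting it mechanically makes the build configuration easier to scan; this uses the exact string from the log:

# Feature string copied verbatim from the systemd 255 banner above.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
          "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

tokens = banner.split()
enabled = [t[1:] for t in tokens if t.startswith("+")]
disabled = [t[1:] for t in tokens if t.startswith("-")]
print("enabled: ", " ".join(enabled))   # PAM AUDIT SELINUX IMA ...
print("disabled:", " ".join(disabled))  # APPARMOR GNUTLS ACL ...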
Feb 13 16:04:38.284059 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:04:38.284091 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:04:38.284123 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:04:38.284158 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:04:38.286704 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:04:38.286739 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:04:38.286773 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:04:38.286803 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:38.286835 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:38.286868 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:04:38.286899 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:04:38.286929 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:04:38.286968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:04:38.286998 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:04:38.287027 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:38.287061 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 16:04:38.287093 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 16:04:38.287126 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 16:04:38.287346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:04:38.287386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:38.287425 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:04:38.287459 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:04:38.287490 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:04:38.287520 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:04:38.287552 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:04:38.287584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:04:38.287615 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:38.287646 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:04:38.287676 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:04:38.287710 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:04:38.287742 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:04:38.287772 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:04:38.287804 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:04:38.287853 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:04:38.287884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 16:04:38.287915 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:04:38.287950 systemd[1]: Reached target machines.target - Containers. Feb 13 16:04:38.287979 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:04:38.288014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:38.288044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:04:38.288073 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:04:38.288104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:38.288140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:04:38.289354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:38.289396 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:04:38.289426 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:04:38.289464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:04:38.289494 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 16:04:38.289524 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 16:04:38.289555 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 16:04:38.289585 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 16:04:38.289614 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:04:38.289644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:04:38.289676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:04:38.289706 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:04:38.289740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:04:38.289772 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 16:04:38.289804 systemd[1]: Stopped verity-setup.service. Feb 13 16:04:38.289832 kernel: loop: module loaded Feb 13 16:04:38.289861 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:04:38.289890 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:04:38.289922 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:04:38.289951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:04:38.289980 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:04:38.290014 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:04:38.290044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:38.290073 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:04:38.290103 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 16:04:38.290133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:38.290187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
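The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are all instances of one template unit that passes its instance name to modprobe; upstream systemd uses roughly "modprobe -abq %i" (treat the exact flags as an assumption here). A Python equivalent of what each instance does:

import subprocess

def load_module(name: str) -> None:
    # -a: treat all arguments as module names, -b: apply blacklists,
    # -q: stay quiet if the module does not exist.
    subprocess.run(["modprobe", "-abq", name], check=False)

for mod in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
    load_module(mod)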
Feb 13 16:04:38.290224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:38.290257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:38.290287 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:38.290317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:38.290349 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:04:38.290380 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:04:38.290415 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:04:38.290456 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:04:38.290531 systemd-journald[1572]: Collecting audit messages is disabled. Feb 13 16:04:38.290584 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:04:38.290617 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:04:38.290654 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:04:38.290686 systemd-journald[1572]: Journal started Feb 13 16:04:38.290732 systemd-journald[1572]: Runtime Journal (/run/log/journal/ec2c977cc69cda92753d466eb078b753) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:04:37.645135 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:04:37.728407 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 16:04:37.729333 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 16:04:38.315192 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:04:38.336189 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:04:38.342442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:38.355887 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:04:38.362310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:04:38.368220 kernel: fuse: init (API version 7.39) Feb 13 16:04:38.373239 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 16:04:38.380220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:04:38.394196 kernel: ACPI: bus type drm_connector registered Feb 13 16:04:38.399844 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:04:38.410420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:04:38.418193 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:04:38.423294 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:04:38.428907 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:04:38.429236 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:04:38.434891 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Feb 13 16:04:38.439326 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:04:38.444930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:38.450592 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:04:38.458307 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:04:38.486298 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:04:38.514688 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 16:04:38.524388 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:04:38.545200 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:04:38.564541 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:04:38.582899 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:04:38.589969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:04:38.597433 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:04:38.642270 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:04:38.664279 systemd-journald[1572]: Time spent on flushing to /var/log/journal/ec2c977cc69cda92753d466eb078b753 is 103.567ms for 913 entries. Feb 13 16:04:38.664279 systemd-journald[1572]: System Journal (/var/log/journal/ec2c977cc69cda92753d466eb078b753) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:04:38.781610 systemd-journald[1572]: Received client request to flush runtime journal. Feb 13 16:04:38.781713 kernel: loop1: detected capacity change from 0 to 52536 Feb 13 16:04:38.781751 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 16:04:38.660714 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Feb 13 16:04:38.660747 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Feb 13 16:04:38.706299 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:38.731632 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:04:38.743153 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:04:38.744651 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:04:38.750853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:38.757768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:38.778847 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:04:38.789394 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:04:38.839921 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 16:04:38.847648 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:04:38.864425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:04:38.927846 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 16:04:38.927887 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. 
Feb 13 16:04:38.929221 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 16:04:38.947268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:04:39.058222 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 16:04:39.077215 kernel: loop5: detected capacity change from 0 to 52536 Feb 13 16:04:39.095228 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 16:04:39.140208 kernel: loop7: detected capacity change from 0 to 114328 Feb 13 16:04:39.156811 (sd-merge)[1644]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 16:04:39.158145 (sd-merge)[1644]: Merged extensions into '/usr'. Feb 13 16:04:39.169591 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:04:39.169648 systemd[1]: Reloading... Feb 13 16:04:39.334206 zram_generator::config[1670]: No configuration found. Feb 13 16:04:39.729266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:04:39.846300 systemd[1]: Reloading finished in 675 ms. Feb 13 16:04:39.883520 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:04:39.886982 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:04:39.903626 systemd[1]: Starting ensure-sysext.service... Feb 13 16:04:39.918494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:04:39.936561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:39.944386 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:04:39.944421 systemd[1]: Reloading... Feb 13 16:04:39.989949 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:04:40.003078 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:04:40.008385 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:04:40.009087 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Feb 13 16:04:40.009268 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Feb 13 16:04:40.023000 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:04:40.023038 systemd-tmpfiles[1723]: Skipping /boot Feb 13 16:04:40.063878 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:04:40.063906 systemd-tmpfiles[1723]: Skipping /boot Feb 13 16:04:40.133886 zram_generator::config[1751]: No configuration found. Feb 13 16:04:40.152598 systemd-udevd[1724]: Using default interface naming scheme 'v255'. Feb 13 16:04:40.169077 ldconfig[1586]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:04:40.385380 (udev-worker)[1785]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:40.494522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
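The (sd-merge) lines show systemd-sysext discovering four extension images and merging them into /usr, apparently also why loop4-loop7 report the same capacities as loop0-loop3: each image seems to be attached again for the merge. A sketch of just the discovery step; the directory list is an assumption covering the common search locations, not the complete set:

import os

SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

images = []
for d in SEARCH:
    if os.path.isdir(d):
        for entry in sorted(os.listdir(d)):
            if entry.endswith(".raw"):
                # "kubernetes.raw" -> extension name "kubernetes"
                images.append((entry[: -len(".raw")], os.path.join(d, entry)))

for name, path in images:
    print(f"{name}: would loop-mount {path} and overlay it onto /usr")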
Feb 13 16:04:40.647870 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:04:40.648691 systemd[1]: Reloading finished in 703 ms. Feb 13 16:04:40.676247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:40.679505 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:04:40.689149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:40.709191 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1793) Feb 13 16:04:40.742623 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:40.763860 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:04:40.774541 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:04:40.823760 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:04:40.833681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:04:40.841536 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:04:40.850434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:04:40.867824 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:04:40.877093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:40.885760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:40.897920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:40.914681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:04:40.916898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:40.921347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:40.922269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:40.949062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:40.956123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:40.958685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:40.976973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:41.007078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:04:41.009367 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:41.009776 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:04:41.015239 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:04:41.020245 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:04:41.038635 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:04:41.041378 systemd[1]: Finished ensure-sysext.service. 
Feb 13 16:04:41.044030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:41.046276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:41.053949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:04:41.094567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:04:41.100226 augenrules[1933]: No rules Feb 13 16:04:41.100752 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:04:41.101931 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:41.102291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:41.113606 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:41.136960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:41.137440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:41.138577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:04:41.151579 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:04:41.151954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:04:41.169720 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:04:41.254769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:04:41.255548 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:04:41.279860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:04:41.291421 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:04:41.317979 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:04:41.332695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:41.361190 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:04:41.365783 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:04:41.395999 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:04:41.400781 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:41.415957 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:04:41.457129 lvm[1981]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:04:41.459117 systemd-networkd[1859]: lo: Link UP Feb 13 16:04:41.459133 systemd-networkd[1859]: lo: Gained carrier Feb 13 16:04:41.462335 systemd-networkd[1859]: Enumeration completed Feb 13 16:04:41.462641 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:04:41.467635 systemd-networkd[1859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 16:04:41.468328 systemd-networkd[1859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:04:41.473553 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:04:41.476952 systemd-networkd[1859]: eth0: Link UP Feb 13 16:04:41.477885 systemd-networkd[1859]: eth0: Gained carrier Feb 13 16:04:41.477925 systemd-networkd[1859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:41.494345 systemd-networkd[1859]: eth0: DHCPv4 address 172.31.31.154/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:04:41.498513 systemd-resolved[1884]: Positive Trust Anchors: Feb 13 16:04:41.499017 systemd-resolved[1884]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:04:41.499239 systemd-resolved[1884]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:04:41.509083 systemd-resolved[1884]: Defaulting to hostname 'linux'. Feb 13 16:04:41.512385 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:04:41.514874 systemd[1]: Reached target network.target - Network. Feb 13 16:04:41.516762 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:41.518952 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:04:41.521086 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:04:41.523388 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:04:41.526093 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:04:41.528517 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:04:41.530958 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:04:41.533277 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:04:41.533331 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:04:41.535015 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:04:41.537927 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:04:41.542832 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:04:41.553855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:04:41.557427 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:04:41.560158 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:04:41.563668 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:04:41.565878 systemd[1]: Reached target basic.target - Basic System. 
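The DHCPv4 lease above, 172.31.31.154/20 with gateway 172.31.16.1 offered by the gateway itself, is internally consistent: the /20 mask puts both addresses in the same block, which the ipaddress module confirms:

import ipaddress

iface = ipaddress.ip_interface("172.31.31.154/20")   # leased address
gateway = ipaddress.ip_address("172.31.16.1")        # offered gateway/server

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True: first usable host of the /20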
Feb 13 16:04:41.568214 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:04:41.568272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:04:41.576365 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:04:41.591504 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:04:41.597538 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:04:41.611222 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:04:41.618130 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:04:41.621866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:04:41.633564 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:04:41.646314 jq[1990]: false Feb 13 16:04:41.652872 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 16:04:41.659855 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:04:41.665946 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 16:04:41.671783 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:04:41.695630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:04:41.710477 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:04:41.715497 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:04:41.716824 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:04:41.719007 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:04:41.725122 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:04:41.732497 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:04:41.734317 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:04:41.793215 dbus-daemon[1989]: [system] SELinux support is enabled Feb 13 16:04:41.793923 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:04:41.803414 dbus-daemon[1989]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1859 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 16:04:41.803837 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:04:41.806888 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 16:04:41.803887 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:04:41.807370 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 16:04:41.807410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:04:41.842551 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 16:04:41.845084 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:04:41.846575 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:04:41.898469 update_engine[2004]: I20250213 16:04:41.888635 2004 main.cc:92] Flatcar Update Engine starting Feb 13 16:04:41.898469 update_engine[2004]: I20250213 16:04:41.896930 2004 update_check_scheduler.cc:74] Next update check in 8m45s Feb 13 16:04:41.888054 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:04:41.889516 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 16:04:41.896318 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch failed with 404: resource not found Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.910 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.911 INFO Fetch successful Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.911 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 16:04:41.918738 coreos-metadata[1988]: Feb 13 16:04:41.911 INFO Fetch successful Feb 13 16:04:41.934418 tar[2010]: linux-arm64/helm Feb 13 16:04:41.919678 systemd[1]: Started 
locksmithd.service - Cluster reboot manager. Feb 13 16:04:41.946149 jq[2005]: true Feb 13 16:04:41.963697 extend-filesystems[1991]: Found loop4 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found loop5 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found loop6 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found loop7 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p1 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p2 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p3 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found usr Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p4 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p6 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p7 Feb 13 16:04:41.963697 extend-filesystems[1991]: Found nvme0n1p9 Feb 13 16:04:41.963697 extend-filesystems[1991]: Checking size of /dev/nvme0n1p9 Feb 13 16:04:42.003538 (ntainerd)[2027]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: ---------------------------------------------------- Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: corporation. 
Support and training for ntp-4 are Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: available at https://www.nwtime.org/support Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: ---------------------------------------------------- Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: proto: precision = 0.096 usec (-23) Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: basedate set to 2025-02-01 Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: gps base set to 2025-02-02 (week 2352) Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:04:42.025482 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:04:42.007521 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:04:42.007575 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Listen normally on 3 eth0 172.31.31.154:123 Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Listen normally on 4 lo [::1]:123 Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: bind(21) AF_INET6 fe80::402:5bff:feb2:2b51%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: unable to create socket on eth0 (5) for fe80::402:5bff:feb2:2b51%2#123 Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: failed to init interface for address fe80::402:5bff:feb2:2b51%2 Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: Listening on routing socket on fd #21 for interface updates Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.038125 ntpd[1993]: 13 Feb 16:04:42 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.007596 ntpd[1993]: ---------------------------------------------------- Feb 13 16:04:42.040986 jq[2036]: true Feb 13 16:04:42.007615 ntpd[1993]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:04:42.007634 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:04:42.007652 ntpd[1993]: corporation. 
Support and training for ntp-4 are Feb 13 16:04:42.007676 ntpd[1993]: available at https://www.nwtime.org/support Feb 13 16:04:42.007695 ntpd[1993]: ---------------------------------------------------- Feb 13 16:04:42.011453 ntpd[1993]: proto: precision = 0.096 usec (-23) Feb 13 16:04:42.014278 ntpd[1993]: basedate set to 2025-02-01 Feb 13 16:04:42.014320 ntpd[1993]: gps base set to 2025-02-02 (week 2352) Feb 13 16:04:42.022759 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:04:42.022857 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:04:42.029055 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:04:42.029130 ntpd[1993]: Listen normally on 3 eth0 172.31.31.154:123 Feb 13 16:04:42.029242 ntpd[1993]: Listen normally on 4 lo [::1]:123 Feb 13 16:04:42.029327 ntpd[1993]: bind(21) AF_INET6 fe80::402:5bff:feb2:2b51%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:04:42.029371 ntpd[1993]: unable to create socket on eth0 (5) for fe80::402:5bff:feb2:2b51%2#123 Feb 13 16:04:42.029398 ntpd[1993]: failed to init interface for address fe80::402:5bff:feb2:2b51%2 Feb 13 16:04:42.029454 ntpd[1993]: Listening on routing socket on fd #21 for interface updates Feb 13 16:04:42.036153 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.036238 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.089924 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 16:04:42.104276 extend-filesystems[1991]: Resized partition /dev/nvme0n1p9 Feb 13 16:04:42.121672 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:04:42.130268 extend-filesystems[2049]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:04:42.148762 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:04:42.151233 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:04:42.167215 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 16:04:42.270209 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 16:04:42.290430 systemd-logind[2001]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 16:04:42.290489 systemd-logind[2001]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 16:04:42.295623 extend-filesystems[2049]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 16:04:42.295623 extend-filesystems[2049]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 16:04:42.295623 extend-filesystems[2049]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 16:04:42.293488 systemd-logind[2001]: New seat seat0. Feb 13 16:04:42.330314 bash[2070]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:04:42.330520 extend-filesystems[1991]: Resized filesystem in /dev/nvme0n1p9 Feb 13 16:04:42.298614 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:04:42.306092 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:04:42.308741 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:04:42.335235 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:04:42.403813 systemd[1]: Starting sshkeys.service... 
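The online resize above takes the root filesystem from 553472 to 1489915 blocks of 4 KiB each; in bytes that is roughly a 2.1 GiB filesystem growing to about 5.7 GiB, presumably to fill the backing EBS volume:

BLOCK = 4096  # "(4k)" block size per the resize2fs output above

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"{gib(553_472):.2f} GiB -> {gib(1_489_915):.2f} GiB")  # 2.11 GiB -> 5.68 GiB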
Feb 13 16:04:42.441192 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1793) Feb 13 16:04:42.461250 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:04:42.497766 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 16:04:42.500502 dbus-daemon[1989]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2020 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 16:04:42.510413 locksmithd[2026]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:04:42.512067 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:04:42.515037 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 16:04:42.530957 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 16:04:42.582261 polkitd[2118]: Started polkitd version 121 Feb 13 16:04:42.607593 polkitd[2118]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 16:04:42.607800 polkitd[2118]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 16:04:42.612985 polkitd[2118]: Finished loading, compiling and executing 2 rules Feb 13 16:04:42.616529 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 16:04:42.616859 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 16:04:42.621856 polkitd[2118]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 16:04:42.668055 systemd-hostnamed[2020]: Hostname set to (transient) Feb 13 16:04:42.668252 systemd-resolved[1884]: System hostname changed to 'ip-172-31-31-154'. Feb 13 16:04:42.769385 coreos-metadata[2095]: Feb 13 16:04:42.768 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:04:42.773223 coreos-metadata[2095]: Feb 13 16:04:42.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 16:04:42.773223 coreos-metadata[2095]: Feb 13 16:04:42.772 INFO Fetch successful Feb 13 16:04:42.773223 coreos-metadata[2095]: Feb 13 16:04:42.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 16:04:42.774116 coreos-metadata[2095]: Feb 13 16:04:42.774 INFO Fetch successful Feb 13 16:04:42.789462 unknown[2095]: wrote ssh authorized keys file for user: core Feb 13 16:04:42.825420 containerd[2027]: time="2025-02-13T16:04:42.823430604Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 16:04:42.857133 update-ssh-keys[2165]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:04:42.862650 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:04:42.870242 systemd[1]: Finished sshkeys.service. 
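The sshkeys path mirrors what coreos-metadata logs above: fetch public-keys/0/openssh-key with an IMDSv2 token, then rewrite the target user's authorized_keys. A self-contained sketch; the token TTL, the file modes, and resolving the home directory via ~core are assumptions:

import os
import urllib.request

IMDS = "http://169.254.169.254"

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

# Token handshake, as in the earlier sketch.
req = urllib.request.Request(
    IMDS + "/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Same metadata path that coreos-metadata fetches in the log.
key = imds_get("/2021-01-03/meta-data/public-keys/0/openssh-key", token)

dest = os.path.expanduser("~core/.ssh/authorized_keys")
os.makedirs(os.path.dirname(dest), mode=0o700, exist_ok=True)
with open(dest, "w") as f:
    f.write(key.rstrip() + "\n")
os.chmod(dest, 0o600)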
Feb 13 16:04:43.008310 ntpd[1993]: bind(24) AF_INET6 fe80::402:5bff:feb2:2b51%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:04:43.009646 ntpd[1993]: 13 Feb 16:04:43 ntpd[1993]: bind(24) AF_INET6 fe80::402:5bff:feb2:2b51%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:04:43.009646 ntpd[1993]: 13 Feb 16:04:43 ntpd[1993]: unable to create socket on eth0 (6) for fe80::402:5bff:feb2:2b51%2#123 Feb 13 16:04:43.009646 ntpd[1993]: 13 Feb 16:04:43 ntpd[1993]: failed to init interface for address fe80::402:5bff:feb2:2b51%2 Feb 13 16:04:43.008384 ntpd[1993]: unable to create socket on eth0 (6) for fe80::402:5bff:feb2:2b51%2#123 Feb 13 16:04:43.008413 ntpd[1993]: failed to init interface for address fe80::402:5bff:feb2:2b51%2 Feb 13 16:04:43.063861 containerd[2027]: time="2025-02-13T16:04:43.063496473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.070297 containerd[2027]: time="2025-02-13T16:04:43.070215873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.070297 containerd[2027]: time="2025-02-13T16:04:43.070288173Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:04:43.070449 containerd[2027]: time="2025-02-13T16:04:43.070325085Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.070617597Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.070664049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.070786965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.070816341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.071104725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.071137041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.071189973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.071219877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.071393877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.071815569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072208 containerd[2027]: time="2025-02-13T16:04:43.072006873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.072714 containerd[2027]: time="2025-02-13T16:04:43.072038637Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:04:43.075917 containerd[2027]: time="2025-02-13T16:04:43.075857709Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:04:43.076118 containerd[2027]: time="2025-02-13T16:04:43.076058541Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:04:43.082448 containerd[2027]: time="2025-02-13T16:04:43.082387689Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:04:43.082556 containerd[2027]: time="2025-02-13T16:04:43.082497561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:04:43.082604 containerd[2027]: time="2025-02-13T16:04:43.082538805Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:04:43.082679 containerd[2027]: time="2025-02-13T16:04:43.082606437Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:04:43.082679 containerd[2027]: time="2025-02-13T16:04:43.082642749Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:04:43.082929 containerd[2027]: time="2025-02-13T16:04:43.082888065Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083401605Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083618733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083651061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083690049Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083725185Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083774361Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083813361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083844513Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083878077Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083909181Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083937729Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.083970465Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.084010401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.085179 containerd[2027]: time="2025-02-13T16:04:43.084041709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.085790 containerd[2027]: time="2025-02-13T16:04:43.084071565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.085790 containerd[2027]: time="2025-02-13T16:04:43.084102117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.085790 containerd[2027]: time="2025-02-13T16:04:43.084131277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086695 containerd[2027]: time="2025-02-13T16:04:43.086647365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086757 containerd[2027]: time="2025-02-13T16:04:43.086702841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086757 containerd[2027]: time="2025-02-13T16:04:43.086738517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086841 containerd[2027]: time="2025-02-13T16:04:43.086773293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086841 containerd[2027]: time="2025-02-13T16:04:43.086808201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086924 containerd[2027]: time="2025-02-13T16:04:43.086841657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086924 containerd[2027]: time="2025-02-13T16:04:43.086870973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.086924 containerd[2027]: time="2025-02-13T16:04:43.086903169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 16:04:43.087149 containerd[2027]: time="2025-02-13T16:04:43.086949297Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:04:43.087149 containerd[2027]: time="2025-02-13T16:04:43.087002889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.087149 containerd[2027]: time="2025-02-13T16:04:43.087033729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.087149 containerd[2027]: time="2025-02-13T16:04:43.087069117Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:04:43.087361 containerd[2027]: time="2025-02-13T16:04:43.087213309Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:04:43.087361 containerd[2027]: time="2025-02-13T16:04:43.087254133Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:04:43.087361 containerd[2027]: time="2025-02-13T16:04:43.087280509Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:04:43.090210 containerd[2027]: time="2025-02-13T16:04:43.087309213Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:04:43.090210 containerd[2027]: time="2025-02-13T16:04:43.088704669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.090210 containerd[2027]: time="2025-02-13T16:04:43.088751601Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:04:43.090210 containerd[2027]: time="2025-02-13T16:04:43.088776813Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:04:43.090210 containerd[2027]: time="2025-02-13T16:04:43.088801809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 16:04:43.090502 containerd[2027]: time="2025-02-13T16:04:43.089324301Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:04:43.090502 containerd[2027]: time="2025-02-13T16:04:43.089429973Z" level=info msg="Connect containerd service" Feb 13 16:04:43.090502 containerd[2027]: time="2025-02-13T16:04:43.089484645Z" level=info msg="using legacy CRI server" Feb 13 16:04:43.090502 containerd[2027]: time="2025-02-13T16:04:43.089502309Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:04:43.090502 containerd[2027]: time="2025-02-13T16:04:43.089644413Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:04:43.092281 containerd[2027]: time="2025-02-13T16:04:43.092223729Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:04:43.093048 
containerd[2027]: time="2025-02-13T16:04:43.092542977Z" level=info msg="Start subscribing containerd event" Feb 13 16:04:43.093048 containerd[2027]: time="2025-02-13T16:04:43.092632089Z" level=info msg="Start recovering state" Feb 13 16:04:43.093048 containerd[2027]: time="2025-02-13T16:04:43.092755089Z" level=info msg="Start event monitor" Feb 13 16:04:43.093048 containerd[2027]: time="2025-02-13T16:04:43.092779821Z" level=info msg="Start snapshots syncer" Feb 13 16:04:43.093048 containerd[2027]: time="2025-02-13T16:04:43.092800905Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:04:43.093048 containerd[2027]: time="2025-02-13T16:04:43.092819661Z" level=info msg="Start streaming server" Feb 13 16:04:43.094629 containerd[2027]: time="2025-02-13T16:04:43.094579821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:04:43.096180 containerd[2027]: time="2025-02-13T16:04:43.095246553Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:04:43.101211 containerd[2027]: time="2025-02-13T16:04:43.097885509Z" level=info msg="containerd successfully booted in 0.287397s" Feb 13 16:04:43.098025 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:04:43.375603 sshd_keygen[2034]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:04:43.401994 tar[2010]: linux-arm64/LICENSE Feb 13 16:04:43.402636 tar[2010]: linux-arm64/README.md Feb 13 16:04:43.431293 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:04:43.434984 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:04:43.446732 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:04:43.463576 systemd[1]: Started sshd@0-172.31.31.154:22-139.178.68.195:50738.service - OpenSSH per-connection server daemon (139.178.68.195:50738). Feb 13 16:04:43.476253 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:04:43.476706 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:04:43.486652 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:04:43.499416 systemd-networkd[1859]: eth0: Gained IPv6LL Feb 13 16:04:43.504136 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:04:43.511731 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:04:43.529119 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 16:04:43.545461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:04:43.554307 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:04:43.557785 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:04:43.571778 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:04:43.580970 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:04:43.583683 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:04:43.655994 amazon-ssm-agent[2212]: Initializing new seelog logger Feb 13 16:04:43.658009 amazon-ssm-agent[2212]: New Seelog Logger Creation Complete Feb 13 16:04:43.658009 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.658009 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
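With containerd now serving on /run/containerd/containerd.sock, the daemon can be probed directly; a quick sketch using the bundled ctr client:

    # confirm the daemon answers on its socket and that the overlayfs/CRI plugins loaded
    sudo ctr --address /run/containerd/containerd.sock version
    sudo ctr plugins ls | grep -E 'overlayfs|cri'

The earlier "failed to load cni during init" error is benign at this stage: /etc/cni/net.d is only populated once a network plugin is installed, and the cni conf syncer started above picks the config up when it appears.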
Feb 13 16:04:43.658009 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.659352 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.659488 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.659762 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.660128 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.660246 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.660447 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.661421 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO Proxy environment variables: Feb 13 16:04:43.666796 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.666796 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.666982 amazon-ssm-agent[2212]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.673291 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:04:43.762269 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO no_proxy: Feb 13 16:04:43.790187 sshd[2206]: Accepted publickey for core from 139.178.68.195 port 50738 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:43.797006 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:43.821155 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:04:43.833139 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:04:43.844279 systemd-logind[2001]: New session 1 of user core. Feb 13 16:04:43.860490 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO https_proxy: Feb 13 16:04:43.886245 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:04:43.906959 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:04:43.931378 (systemd)[2234]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:04:43.960229 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO http_proxy: Feb 13 16:04:44.059805 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO Checking if agent identity type OnPrem can be assumed Feb 13 16:04:44.157434 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO Checking if agent identity type EC2 can be assumed Feb 13 16:04:44.194753 systemd[2234]: Queued start job for default target default.target. Feb 13 16:04:44.201914 systemd[2234]: Created slice app.slice - User Application Slice. Feb 13 16:04:44.201971 systemd[2234]: Reached target paths.target - Paths. Feb 13 16:04:44.202004 systemd[2234]: Reached target timers.target - Timers. Feb 13 16:04:44.207434 systemd[2234]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:04:44.235155 systemd[2234]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:04:44.235434 systemd[2234]: Reached target sockets.target - Sockets. Feb 13 16:04:44.235467 systemd[2234]: Reached target basic.target - Basic System. Feb 13 16:04:44.235591 systemd[2234]: Reached target default.target - Main User Target. Feb 13 16:04:44.235669 systemd[2234]: Startup finished in 284ms. 
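Session 1 triggers systemd's per-user service manager (user@500.service), whose own startup is timed at 284ms above. A sketch for inspecting that state:

    # list logind sessions and the per-user manager's unit tree
    loginctl list-sessions
    systemctl status user@500.service --no-pager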
Feb 13 16:04:44.236343 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:04:44.252599 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:04:44.258422 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO Agent will take identity from EC2 Feb 13 16:04:44.357787 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.425930 systemd[1]: Started sshd@1-172.31.31.154:22-139.178.68.195:50746.service - OpenSSH per-connection server daemon (139.178.68.195:50746). Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [Registrar] Starting registrar module Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:44 INFO [EC2Identity] EC2 registration was successful. Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:44 INFO [CredentialRefresher] credentialRefresher has started Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:44 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 16:04:44.434351 amazon-ssm-agent[2212]: 2025-02-13 16:04:44 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 16:04:44.457480 amazon-ssm-agent[2212]: 2025-02-13 16:04:44 INFO [CredentialRefresher] Next credential rotation will be in 30.158321553633332 minutes Feb 13 16:04:44.611628 sshd[2248]: Accepted publickey for core from 139.178.68.195 port 50746 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:44.614478 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:44.625159 systemd-logind[2001]: New session 2 of user core. Feb 13 16:04:44.631649 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:04:44.760017 sshd[2248]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:44.767642 systemd-logind[2001]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:04:44.768259 systemd[1]: sshd@1-172.31.31.154:22-139.178.68.195:50746.service: Deactivated successfully. Feb 13 16:04:44.772931 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:04:44.777628 systemd-logind[2001]: Removed session 2. Feb 13 16:04:44.805035 systemd[1]: Started sshd@2-172.31.31.154:22-139.178.68.195:50750.service - OpenSSH per-connection server daemon (139.178.68.195:50750). 
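The SSM agent has completed EC2 registration and started its credential refresher loop, with the next rotation due in roughly 30 minutes. Its health can be checked with standard unit tooling, e.g.:

    systemctl status amazon-ssm-agent --no-pager
    journalctl -u amazon-ssm-agent --since "5 minutes ago" --no-pager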
Feb 13 16:04:44.991473 sshd[2255]: Accepted publickey for core from 139.178.68.195 port 50750 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:44.994490 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:45.005437 systemd-logind[2001]: New session 3 of user core. Feb 13 16:04:45.010731 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:04:45.143951 sshd[2255]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:45.152556 systemd[1]: sshd@2-172.31.31.154:22-139.178.68.195:50750.service: Deactivated successfully. Feb 13 16:04:45.156746 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:04:45.159691 systemd-logind[2001]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:04:45.163253 systemd-logind[2001]: Removed session 3. Feb 13 16:04:45.471654 amazon-ssm-agent[2212]: 2025-02-13 16:04:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 16:04:45.572006 amazon-ssm-agent[2212]: 2025-02-13 16:04:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Feb 13 16:04:45.674434 amazon-ssm-agent[2212]: 2025-02-13 16:04:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 16:04:45.718535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:04:45.720138 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:04:45.723188 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:04:45.726302 systemd[1]: Startup finished in 1.247s (kernel) + 8.519s (initrd) + 9.451s (userspace) = 19.219s. Feb 13 16:04:46.008374 ntpd[1993]: Listen normally on 7 eth0 [fe80::402:5bff:feb2:2b51%2]:123 Feb 13 16:04:46.009046 ntpd[1993]: 13 Feb 16:04:46 ntpd[1993]: Listen normally on 7 eth0 [fe80::402:5bff:feb2:2b51%2]:123 Feb 13 16:04:46.851554 kubelet[2276]: E0213 16:04:46.851457 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:04:46.856650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:04:46.856996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:04:46.858335 systemd[1]: kubelet.service: Consumed 1.260s CPU time. Feb 13 16:04:48.818318 systemd-resolved[1884]: Clock change detected. Flushing caches. Feb 13 16:04:54.997619 systemd[1]: Started sshd@3-172.31.31.154:22-139.178.68.195:42232.service - OpenSSH per-connection server daemon (139.178.68.195:42232). Feb 13 16:04:55.164576 sshd[2290]: Accepted publickey for core from 139.178.68.195 port 42232 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:55.167201 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:55.175515 systemd-logind[2001]: New session 4 of user core. Feb 13 16:04:55.182248 systemd[1]: Started session-4.scope - Session 4 of User core. 
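The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until the node is initialized, so the unit exits and systemd reschedules it (the restart counter climbs through the rest of this log). A sketch of the usual resolution, assuming a kubeadm-provisioned node, which this log does not itself confirm:

    ls /var/lib/kubelet/config.yaml          # absent until bootstrap completes
    # kubeadm writes this file during init/join; the pod CIDR below is illustrative only
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16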
Feb 13 16:04:55.311545 sshd[2290]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:55.320326 systemd-logind[2001]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:04:55.321357 systemd[1]: sshd@3-172.31.31.154:22-139.178.68.195:42232.service: Deactivated successfully. Feb 13 16:04:55.325624 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:04:55.327322 systemd-logind[2001]: Removed session 4. Feb 13 16:04:55.350603 systemd[1]: Started sshd@4-172.31.31.154:22-139.178.68.195:42248.service - OpenSSH per-connection server daemon (139.178.68.195:42248). Feb 13 16:04:55.528853 sshd[2297]: Accepted publickey for core from 139.178.68.195 port 42248 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:55.531456 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:55.540378 systemd-logind[2001]: New session 5 of user core. Feb 13 16:04:55.550315 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:04:55.670185 sshd[2297]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:55.677975 systemd[1]: sshd@4-172.31.31.154:22-139.178.68.195:42248.service: Deactivated successfully. Feb 13 16:04:55.681924 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:04:55.684166 systemd-logind[2001]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:04:55.686196 systemd-logind[2001]: Removed session 5. Feb 13 16:04:55.715487 systemd[1]: Started sshd@5-172.31.31.154:22-139.178.68.195:42260.service - OpenSSH per-connection server daemon (139.178.68.195:42260). Feb 13 16:04:55.879667 sshd[2304]: Accepted publickey for core from 139.178.68.195 port 42260 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:55.882612 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:55.891488 systemd-logind[2001]: New session 6 of user core. Feb 13 16:04:55.899349 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:04:56.027410 sshd[2304]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:56.033333 systemd-logind[2001]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:04:56.034843 systemd[1]: sshd@5-172.31.31.154:22-139.178.68.195:42260.service: Deactivated successfully. Feb 13 16:04:56.038733 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:04:56.040501 systemd-logind[2001]: Removed session 6. Feb 13 16:04:56.065404 systemd[1]: Started sshd@6-172.31.31.154:22-139.178.68.195:42262.service - OpenSSH per-connection server daemon (139.178.68.195:42262). Feb 13 16:04:56.247706 sshd[2311]: Accepted publickey for core from 139.178.68.195 port 42262 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:56.250426 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:56.258092 systemd-logind[2001]: New session 7 of user core. Feb 13 16:04:56.266274 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:04:56.402598 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:04:56.403660 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:56.423233 sudo[2314]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:56.447194 sshd[2311]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:56.453098 systemd-logind[2001]: Session 7 logged out. 
Waiting for processes to exit. Feb 13 16:04:56.454645 systemd[1]: sshd@6-172.31.31.154:22-139.178.68.195:42262.service: Deactivated successfully. Feb 13 16:04:56.458623 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:04:56.462212 systemd-logind[2001]: Removed session 7. Feb 13 16:04:56.489779 systemd[1]: Started sshd@7-172.31.31.154:22-139.178.68.195:49746.service - OpenSSH per-connection server daemon (139.178.68.195:49746). Feb 13 16:04:56.656709 sshd[2319]: Accepted publickey for core from 139.178.68.195 port 49746 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:56.659687 sshd[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:56.669449 systemd-logind[2001]: New session 8 of user core. Feb 13 16:04:56.670428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:04:56.678270 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 16:04:56.682265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:04:56.786172 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:04:56.786794 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:56.796491 sudo[2326]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:56.809238 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 16:04:56.809905 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:56.842648 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:56.848888 auditctl[2329]: No rules Feb 13 16:04:56.850581 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:04:56.852195 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:56.861808 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:56.928650 augenrules[2347]: No rules Feb 13 16:04:56.932346 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:56.934228 sudo[2325]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:56.961311 sshd[2319]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:56.968313 systemd-logind[2001]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:04:56.969906 systemd[1]: sshd@7-172.31.31.154:22-139.178.68.195:49746.service: Deactivated successfully. Feb 13 16:04:56.975566 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:04:56.999195 systemd-logind[2001]: Removed session 8. Feb 13 16:04:57.005897 systemd[1]: Started sshd@8-172.31.31.154:22-139.178.68.195:49762.service - OpenSSH per-connection server daemon (139.178.68.195:49762). Feb 13 16:04:57.048303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
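The sudo sequence above clears the shipped audit rules and reloads an empty set, hence the two "No rules" reports from auditctl and augenrules. The same steps by hand, as a sketch:

    sudo auditctl -D          # flush all loaded audit rules (what stopping audit-rules does)
    sudo augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the result
    sudo auditctl -l          # list active rules; prints "No rules" here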
Feb 13 16:04:57.066475 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:04:57.148752 kubelet[2362]: E0213 16:04:57.148574 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:04:57.155931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:04:57.156322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:04:57.177143 sshd[2355]: Accepted publickey for core from 139.178.68.195 port 49762 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:57.179733 sshd[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:57.188729 systemd-logind[2001]: New session 9 of user core. Feb 13 16:04:57.199223 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:04:57.303667 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:04:57.304354 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:57.862492 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:04:57.864929 (dockerd)[2387]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:04:58.364600 dockerd[2387]: time="2025-02-13T16:04:58.364499991Z" level=info msg="Starting up" Feb 13 16:04:58.559475 dockerd[2387]: time="2025-02-13T16:04:58.559414444Z" level=info msg="Loading containers: start." Feb 13 16:04:58.763034 kernel: Initializing XFRM netlink socket Feb 13 16:04:58.842664 (udev-worker)[2410]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:58.936414 systemd-networkd[1859]: docker0: Link UP Feb 13 16:04:58.963658 dockerd[2387]: time="2025-02-13T16:04:58.963588750Z" level=info msg="Loading containers: done." Feb 13 16:04:58.994889 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck573319581-merged.mount: Deactivated successfully. Feb 13 16:04:59.000380 dockerd[2387]: time="2025-02-13T16:04:59.000309290Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:04:59.000616 dockerd[2387]: time="2025-02-13T16:04:59.000455594Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 16:04:59.000733 dockerd[2387]: time="2025-02-13T16:04:59.000643790Z" level=info msg="Daemon has completed initialization" Feb 13 16:04:59.057806 dockerd[2387]: time="2025-02-13T16:04:59.056792318Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:04:59.057149 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:05:00.075183 containerd[2027]: time="2025-02-13T16:05:00.074803443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 16:05:00.728432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497122465.mount: Deactivated successfully. 
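Docker has created its docker0 bridge via netlink and finished initialization on /run/docker.sock, using the overlay2 storage driver per the log. A quick verification sketch:

    docker info --format '{{.ServerVersion}} {{.Driver}}'
    ip link show docker0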
Feb 13 16:05:03.265901 containerd[2027]: time="2025-02-13T16:05:03.265829563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:03.268095 containerd[2027]: time="2025-02-13T16:05:03.268042975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 16:05:03.270781 containerd[2027]: time="2025-02-13T16:05:03.270655843Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:03.277425 containerd[2027]: time="2025-02-13T16:05:03.277270099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:03.280012 containerd[2027]: time="2025-02-13T16:05:03.279643219Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 3.204770164s" Feb 13 16:05:03.280012 containerd[2027]: time="2025-02-13T16:05:03.279725383Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 16:05:03.281448 containerd[2027]: time="2025-02-13T16:05:03.281109847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 16:05:05.548675 containerd[2027]: time="2025-02-13T16:05:05.548585363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:05.550172 containerd[2027]: time="2025-02-13T16:05:05.550083239Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 16:05:05.552211 containerd[2027]: time="2025-02-13T16:05:05.552071207Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:05.560294 containerd[2027]: time="2025-02-13T16:05:05.560222375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:05.567078 containerd[2027]: time="2025-02-13T16:05:05.566744963Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.285547264s" Feb 13 16:05:05.567078 containerd[2027]: time="2025-02-13T16:05:05.566825531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 16:05:05.570216 
containerd[2027]: time="2025-02-13T16:05:05.569667647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 16:05:07.224216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:05:07.231436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:07.619416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:07.625461 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:07.718737 kubelet[2595]: E0213 16:05:07.718657 2595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:07.724455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:07.724805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:08.042833 containerd[2027]: time="2025-02-13T16:05:08.042767771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:08.044952 containerd[2027]: time="2025-02-13T16:05:08.044889239Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 16:05:08.046113 containerd[2027]: time="2025-02-13T16:05:08.045968387Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:08.053743 containerd[2027]: time="2025-02-13T16:05:08.053603579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:08.056186 containerd[2027]: time="2025-02-13T16:05:08.056111207Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 2.486335176s" Feb 13 16:05:08.056498 containerd[2027]: time="2025-02-13T16:05:08.056333759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 16:05:08.057368 containerd[2027]: time="2025-02-13T16:05:08.057161711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 16:05:09.633439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250245290.mount: Deactivated successfully. 
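These PullImage messages come from containerd's CRI plugin, so the images land in the k8s.io containerd namespace rather than ctr's default one. A hand-run equivalent, as a sketch:

    # pull and list a control-plane image in the CRI namespace
    sudo ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.31.6
    sudo ctr -n k8s.io images ls -q | grep kube-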
Feb 13 16:05:10.157549 containerd[2027]: time="2025-02-13T16:05:10.157448293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:10.159066 containerd[2027]: time="2025-02-13T16:05:10.158980646Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 16:05:10.160575 containerd[2027]: time="2025-02-13T16:05:10.160485482Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:10.164263 containerd[2027]: time="2025-02-13T16:05:10.164166218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:10.165769 containerd[2027]: time="2025-02-13T16:05:10.165574874Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 2.108336255s" Feb 13 16:05:10.165769 containerd[2027]: time="2025-02-13T16:05:10.165628778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 16:05:10.166601 containerd[2027]: time="2025-02-13T16:05:10.166558778Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:05:10.798546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208684388.mount: Deactivated successfully. 
Feb 13 16:05:11.822661 containerd[2027]: time="2025-02-13T16:05:11.822452238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.823759 containerd[2027]: time="2025-02-13T16:05:11.823702290Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 16:05:11.825535 containerd[2027]: time="2025-02-13T16:05:11.825441630Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.832049 containerd[2027]: time="2025-02-13T16:05:11.831821742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.834438 containerd[2027]: time="2025-02-13T16:05:11.834223350Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.667420672s" Feb 13 16:05:11.834438 containerd[2027]: time="2025-02-13T16:05:11.834287034Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 16:05:11.835298 containerd[2027]: time="2025-02-13T16:05:11.835256046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 16:05:12.375668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount459493412.mount: Deactivated successfully. 
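Note the pause image pulled next (3.10) is newer than the SandboxImage pinned in the CRI config dumped earlier (registry.k8s.io/pause:3.8); the version that sandboxes actually use follows the CRI configuration. A sketch for checking what is configured and present, with the key name assumed from stock containerd config layouts:

    grep -r sandbox_image /etc/containerd/ 2>/dev/null   # key name per stock containerd configs
    sudo ctr -n k8s.io images ls -q | grep pause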
Feb 13 16:05:12.383470 containerd[2027]: time="2025-02-13T16:05:12.383350529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:12.385662 containerd[2027]: time="2025-02-13T16:05:12.385550453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 16:05:12.387075 containerd[2027]: time="2025-02-13T16:05:12.386968685Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:12.391183 containerd[2027]: time="2025-02-13T16:05:12.391121309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:12.393315 containerd[2027]: time="2025-02-13T16:05:12.393119573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 557.806983ms" Feb 13 16:05:12.393315 containerd[2027]: time="2025-02-13T16:05:12.393169565Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 16:05:12.394163 containerd[2027]: time="2025-02-13T16:05:12.394118969Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 16:05:12.501834 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 16:05:12.974756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952154381.mount: Deactivated successfully. Feb 13 16:05:17.974525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 16:05:17.989286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
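At this point the kubelet has failed three times and systemd keeps rescheduling it while the large etcd image (~66 MB) is still being fetched. The restart loop can be watched directly, e.g.:

    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager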
Feb 13 16:05:18.171038 containerd[2027]: time="2025-02-13T16:05:18.169649373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:18.180627 containerd[2027]: time="2025-02-13T16:05:18.180556833Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 16:05:18.183248 containerd[2027]: time="2025-02-13T16:05:18.183141153Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:18.198022 containerd[2027]: time="2025-02-13T16:05:18.195844941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:18.199411 containerd[2027]: time="2025-02-13T16:05:18.199356825Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.805182656s" Feb 13 16:05:18.199584 containerd[2027]: time="2025-02-13T16:05:18.199548897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 16:05:18.376370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:18.381024 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:18.453559 kubelet[2732]: E0213 16:05:18.453462 2732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:18.458390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:18.459092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:24.089469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:24.104493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:24.174625 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)... Feb 13 16:05:24.174871 systemd[1]: Reloading... Feb 13 16:05:24.433036 zram_generator::config[2804]: No configuration found. Feb 13 16:05:24.655579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:24.826103 systemd[1]: Reloading finished in 650 ms. Feb 13 16:05:24.926628 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:05:24.926855 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:05:24.927466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:24.935639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
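The "Reloading requested from client PID 2758" pass is a systemd daemon reload issued from session 9: generators re-run (zram-generator finds no configuration), and the legacy /var/run listen path in docker.socket is transparently remapped to /run/docker.sock with a request to fix the unit file. The equivalent by hand, as a sketch:

    sudo systemctl daemon-reload                        # re-run generators, re-parse unit files
    systemctl cat docker.socket | grep ListenStream     # shows the socket path the warning refers to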
Feb 13 16:05:25.223924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:25.239538 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:05:25.310255 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:25.310255 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:05:25.310255 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:25.310905 kubelet[2862]: I0213 16:05:25.310431 2862 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:05:26.812368 kubelet[2862]: I0213 16:05:26.812296 2862 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:05:26.812368 kubelet[2862]: I0213 16:05:26.812353 2862 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:05:26.813474 kubelet[2862]: I0213 16:05:26.813174 2862 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:05:26.858769 kubelet[2862]: I0213 16:05:26.857909 2862 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:26.859580 kubelet[2862]: E0213 16:05:26.859492 2862 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:26.877114 kubelet[2862]: E0213 16:05:26.877040 2862 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:05:26.877411 kubelet[2862]: I0213 16:05:26.877388 2862 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:05:26.884493 kubelet[2862]: I0213 16:05:26.884451 2862 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:05:26.885124 kubelet[2862]: I0213 16:05:26.884974 2862 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:05:26.886051 kubelet[2862]: I0213 16:05:26.885459 2862 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:05:26.886051 kubelet[2862]: I0213 16:05:26.885512 2862 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-154","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 16:05:26.886051 kubelet[2862]: I0213 16:05:26.885884 2862 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:05:26.886051 kubelet[2862]: I0213 16:05:26.885902 2862 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:05:26.886405 kubelet[2862]: I0213 16:05:26.886117 2862 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:26.890110 kubelet[2862]: I0213 16:05:26.889549 2862 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:05:26.890110 kubelet[2862]: I0213 16:05:26.889599 2862 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:05:26.890110 kubelet[2862]: I0213 16:05:26.889646 2862 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:05:26.890110 kubelet[2862]: I0213 16:05:26.889667 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:05:26.894299 kubelet[2862]: W0213 16:05:26.893191 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-154&limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:26.894299 kubelet[2862]: E0213 16:05:26.893297 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.31.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-154&limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:26.894299 kubelet[2862]: W0213 16:05:26.893908 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:26.894299 kubelet[2862]: E0213 16:05:26.893978 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:26.894578 kubelet[2862]: I0213 16:05:26.894485 2862 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:05:26.897560 kubelet[2862]: I0213 16:05:26.897497 2862 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:05:26.898783 kubelet[2862]: W0213 16:05:26.898743 2862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 16:05:26.901815 kubelet[2862]: I0213 16:05:26.900479 2862 server.go:1269] "Started kubelet" Feb 13 16:05:26.903678 kubelet[2862]: I0213 16:05:26.903617 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:05:26.911216 kubelet[2862]: I0213 16:05:26.911129 2862 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:05:26.914882 kubelet[2862]: I0213 16:05:26.913244 2862 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:05:26.915338 kubelet[2862]: I0213 16:05:26.915304 2862 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 16:05:26.916083 kubelet[2862]: E0213 16:05:26.916048 2862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-154\" not found" Feb 13 16:05:26.916477 kubelet[2862]: I0213 16:05:26.914840 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:05:26.922492 kubelet[2862]: I0213 16:05:26.922437 2862 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:05:26.924028 kubelet[2862]: I0213 16:05:26.918824 2862 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:05:26.924028 kubelet[2862]: I0213 16:05:26.918734 2862 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:05:26.924388 kubelet[2862]: I0213 16:05:26.923734 2862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:05:26.925961 kubelet[2862]: E0213 16:05:26.922969 2862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.154:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.154:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-154.1823d02250eeddc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-154,UID:ip-172-31-31-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-154,},FirstTimestamp:2025-02-13 16:05:26.900440517 +0000 UTC m=+1.654093726,LastTimestamp:2025-02-13 16:05:26.900440517 +0000 UTC m=+1.654093726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-154,}" Feb 13 16:05:26.926433 kubelet[2862]: I0213 16:05:26.926393 2862 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:05:26.926633 kubelet[2862]: I0213 16:05:26.926582 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:05:26.928647 kubelet[2862]: W0213 16:05:26.928561 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:26.928805 kubelet[2862]: E0213 16:05:26.928658 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:26.928863 kubelet[2862]: E0213 16:05:26.928811 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-154?timeout=10s\": dial tcp 172.31.31.154:6443: connect: connection refused" interval="200ms" Feb 13 16:05:26.930678 kubelet[2862]: E0213 16:05:26.930608 2862 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:05:26.931900 kubelet[2862]: I0213 16:05:26.931828 2862 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:05:26.969272 kubelet[2862]: I0213 16:05:26.969200 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:05:26.971773 kubelet[2862]: I0213 16:05:26.971725 2862 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:05:26.971773 kubelet[2862]: I0213 16:05:26.971760 2862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:05:26.972015 kubelet[2862]: I0213 16:05:26.971794 2862 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:26.972965 kubelet[2862]: I0213 16:05:26.972301 2862 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 16:05:26.972965 kubelet[2862]: I0213 16:05:26.972369 2862 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:05:26.972965 kubelet[2862]: I0213 16:05:26.972410 2862 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 16:05:26.972965 kubelet[2862]: E0213 16:05:26.972493 2862 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:05:26.975590 kubelet[2862]: I0213 16:05:26.975421 2862 policy_none.go:49] "None policy: Start" Feb 13 16:05:26.982052 kubelet[2862]: W0213 16:05:26.981146 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:26.982052 kubelet[2862]: E0213 16:05:26.981231 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:26.983494 kubelet[2862]: I0213 16:05:26.983458 2862 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:05:26.983668 kubelet[2862]: I0213 16:05:26.983647 2862 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:05:26.995139 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 16:05:27.012017 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 16:05:27.016621 kubelet[2862]: E0213 16:05:27.016553 2862 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-154\" not found" Feb 13 16:05:27.020098 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 16:05:27.028637 kubelet[2862]: I0213 16:05:27.028598 2862 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:05:27.029929 kubelet[2862]: I0213 16:05:27.029117 2862 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 16:05:27.029929 kubelet[2862]: I0213 16:05:27.029149 2862 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:05:27.029929 kubelet[2862]: I0213 16:05:27.029627 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:05:27.032468 kubelet[2862]: E0213 16:05:27.032416 2862 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-154\" not found" Feb 13 16:05:27.091445 systemd[1]: Created slice kubepods-burstable-pod6dffdb9b3f9c5e1d8bebcb95b29482f9.slice - libcontainer container kubepods-burstable-pod6dffdb9b3f9c5e1d8bebcb95b29482f9.slice. Feb 13 16:05:27.111125 systemd[1]: Created slice kubepods-burstable-pod8d65d00bbf16f0f663e86ccc8c402feb.slice - libcontainer container kubepods-burstable-pod8d65d00bbf16f0f663e86ccc8c402feb.slice. 
Feb 13 16:05:27.127091 kubelet[2862]: I0213 16:05:27.126823 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:27.127091 kubelet[2862]: I0213 16:05:27.126892 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dffdb9b3f9c5e1d8bebcb95b29482f9-ca-certs\") pod \"kube-apiserver-ip-172-31-31-154\" (UID: \"6dffdb9b3f9c5e1d8bebcb95b29482f9\") " pod="kube-system/kube-apiserver-ip-172-31-31-154" Feb 13 16:05:27.127091 kubelet[2862]: I0213 16:05:27.126932 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dffdb9b3f9c5e1d8bebcb95b29482f9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-154\" (UID: \"6dffdb9b3f9c5e1d8bebcb95b29482f9\") " pod="kube-system/kube-apiserver-ip-172-31-31-154" Feb 13 16:05:27.127091 kubelet[2862]: I0213 16:05:27.126974 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:27.127091 kubelet[2862]: I0213 16:05:27.127047 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:27.127603 kubelet[2862]: I0213 16:05:27.127088 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:27.127603 kubelet[2862]: I0213 16:05:27.127136 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:27.127603 kubelet[2862]: I0213 16:05:27.127175 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d70c9b3e7bfe38f528c4b3efd0202219-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-154\" (UID: \"d70c9b3e7bfe38f528c4b3efd0202219\") " pod="kube-system/kube-scheduler-ip-172-31-31-154" Feb 13 16:05:27.127603 kubelet[2862]: I0213 16:05:27.127210 2862 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6dffdb9b3f9c5e1d8bebcb95b29482f9-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-154\" (UID: \"6dffdb9b3f9c5e1d8bebcb95b29482f9\") " pod="kube-system/kube-apiserver-ip-172-31-31-154" Feb 13 16:05:27.129197 systemd[1]: Created slice kubepods-burstable-podd70c9b3e7bfe38f528c4b3efd0202219.slice - libcontainer container kubepods-burstable-podd70c9b3e7bfe38f528c4b3efd0202219.slice. Feb 13 16:05:27.131052 kubelet[2862]: E0213 16:05:27.130737 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-154?timeout=10s\": dial tcp 172.31.31.154:6443: connect: connection refused" interval="400ms" Feb 13 16:05:27.132452 kubelet[2862]: I0213 16:05:27.132416 2862 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-154" Feb 13 16:05:27.133794 kubelet[2862]: E0213 16:05:27.133746 2862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.154:6443/api/v1/nodes\": dial tcp 172.31.31.154:6443: connect: connection refused" node="ip-172-31-31-154" Feb 13 16:05:27.318064 update_engine[2004]: I20250213 16:05:27.317528 2004 update_attempter.cc:509] Updating boot flags... Feb 13 16:05:27.338881 kubelet[2862]: I0213 16:05:27.338842 2862 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-154" Feb 13 16:05:27.341504 kubelet[2862]: E0213 16:05:27.341421 2862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.154:6443/api/v1/nodes\": dial tcp 172.31.31.154:6443: connect: connection refused" node="ip-172-31-31-154" Feb 13 16:05:27.390196 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2906) Feb 13 16:05:27.406824 containerd[2027]: time="2025-02-13T16:05:27.406768051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-154,Uid:6dffdb9b3f9c5e1d8bebcb95b29482f9,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:27.418282 containerd[2027]: time="2025-02-13T16:05:27.416743219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-154,Uid:8d65d00bbf16f0f663e86ccc8c402feb,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:27.436576 containerd[2027]: time="2025-02-13T16:05:27.436524055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-154,Uid:d70c9b3e7bfe38f528c4b3efd0202219,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:27.534355 kubelet[2862]: E0213 16:05:27.534236 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-154?timeout=10s\": dial tcp 172.31.31.154:6443: connect: connection refused" interval="800ms" Feb 13 16:05:27.700061 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2908) Feb 13 16:05:27.745564 kubelet[2862]: I0213 16:05:27.745525 2862 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-154" Feb 13 16:05:27.746559 kubelet[2862]: E0213 16:05:27.746417 2862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.154:6443/api/v1/nodes\": dial tcp 172.31.31.154:6443: connect: connection refused" node="ip-172-31-31-154" Feb 13 16:05:28.048033 kubelet[2862]: W0213 16:05:28.047664 2862 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-154&limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:28.049159 kubelet[2862]: E0213 16:05:28.048779 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-154&limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:28.053116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911591914.mount: Deactivated successfully. Feb 13 16:05:28.071663 containerd[2027]: time="2025-02-13T16:05:28.071578866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:28.073923 containerd[2027]: time="2025-02-13T16:05:28.073846866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:28.075916 containerd[2027]: time="2025-02-13T16:05:28.075814422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 16:05:28.078459 containerd[2027]: time="2025-02-13T16:05:28.078399931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:05:28.080238 containerd[2027]: time="2025-02-13T16:05:28.080170555Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:28.083131 containerd[2027]: time="2025-02-13T16:05:28.082939771Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:28.084817 containerd[2027]: time="2025-02-13T16:05:28.084718855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:05:28.089247 containerd[2027]: time="2025-02-13T16:05:28.089157019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:28.094025 containerd[2027]: time="2025-02-13T16:05:28.093227575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 676.37416ms" Feb 13 16:05:28.098640 containerd[2027]: time="2025-02-13T16:05:28.098237131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 690.539788ms" Feb 13 16:05:28.102502 containerd[2027]: time="2025-02-13T16:05:28.102159823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.348164ms" Feb 13 16:05:28.290560 containerd[2027]: time="2025-02-13T16:05:28.290400044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:28.290560 containerd[2027]: time="2025-02-13T16:05:28.290492540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:28.291622 containerd[2027]: time="2025-02-13T16:05:28.291067196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.291898 containerd[2027]: time="2025-02-13T16:05:28.291309020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.305921 containerd[2027]: time="2025-02-13T16:05:28.300352016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:28.305921 containerd[2027]: time="2025-02-13T16:05:28.300451328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:28.305921 containerd[2027]: time="2025-02-13T16:05:28.300501104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.305921 containerd[2027]: time="2025-02-13T16:05:28.300680960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.315301 containerd[2027]: time="2025-02-13T16:05:28.309629840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:28.315301 containerd[2027]: time="2025-02-13T16:05:28.309737432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:28.315301 containerd[2027]: time="2025-02-13T16:05:28.309767768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.315301 containerd[2027]: time="2025-02-13T16:05:28.309922664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.335724 kubelet[2862]: E0213 16:05:28.335637 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-154?timeout=10s\": dial tcp 172.31.31.154:6443: connect: connection refused" interval="1.6s" Feb 13 16:05:28.337521 systemd[1]: Started cri-containerd-0480a393eeace00d36e1bce918fc637c738610376f0222a354de85102c61d68e.scope - libcontainer container 0480a393eeace00d36e1bce918fc637c738610376f0222a354de85102c61d68e. 
Feb 13 16:05:28.347903 kubelet[2862]: W0213 16:05:28.347849 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:28.348934 kubelet[2862]: E0213 16:05:28.348705 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:28.375419 systemd[1]: Started cri-containerd-16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a.scope - libcontainer container 16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a. Feb 13 16:05:28.380253 systemd[1]: Started cri-containerd-60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4.scope - libcontainer container 60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4. Feb 13 16:05:28.402260 kubelet[2862]: W0213 16:05:28.402113 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:28.402260 kubelet[2862]: E0213 16:05:28.402240 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:28.462381 containerd[2027]: time="2025-02-13T16:05:28.461977580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-154,Uid:6dffdb9b3f9c5e1d8bebcb95b29482f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0480a393eeace00d36e1bce918fc637c738610376f0222a354de85102c61d68e\"" Feb 13 16:05:28.479108 containerd[2027]: time="2025-02-13T16:05:28.477302588Z" level=info msg="CreateContainer within sandbox \"0480a393eeace00d36e1bce918fc637c738610376f0222a354de85102c61d68e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:05:28.509789 kubelet[2862]: W0213 16:05:28.509672 2862 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.154:6443: connect: connection refused Feb 13 16:05:28.510013 kubelet[2862]: E0213 16:05:28.509799 2862 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.154:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:28.513514 containerd[2027]: time="2025-02-13T16:05:28.513313509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-154,Uid:8d65d00bbf16f0f663e86ccc8c402feb,Namespace:kube-system,Attempt:0,} returns sandbox id \"16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a\"" Feb 13 16:05:28.519843 containerd[2027]: 
time="2025-02-13T16:05:28.519627621Z" level=info msg="CreateContainer within sandbox \"16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:05:28.528179 containerd[2027]: time="2025-02-13T16:05:28.528100701Z" level=info msg="CreateContainer within sandbox \"0480a393eeace00d36e1bce918fc637c738610376f0222a354de85102c61d68e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07f7ada7e11099142849dd00e2e2401893dd54f782b7ae158f1893fec3e37a4e\"" Feb 13 16:05:28.529490 containerd[2027]: time="2025-02-13T16:05:28.529429989Z" level=info msg="StartContainer for \"07f7ada7e11099142849dd00e2e2401893dd54f782b7ae158f1893fec3e37a4e\"" Feb 13 16:05:28.530048 containerd[2027]: time="2025-02-13T16:05:28.529862961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-154,Uid:d70c9b3e7bfe38f528c4b3efd0202219,Namespace:kube-system,Attempt:0,} returns sandbox id \"60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4\"" Feb 13 16:05:28.544357 containerd[2027]: time="2025-02-13T16:05:28.543193125Z" level=info msg="CreateContainer within sandbox \"60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:05:28.551521 kubelet[2862]: I0213 16:05:28.551481 2862 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-154" Feb 13 16:05:28.552579 kubelet[2862]: E0213 16:05:28.552483 2862 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.154:6443/api/v1/nodes\": dial tcp 172.31.31.154:6443: connect: connection refused" node="ip-172-31-31-154" Feb 13 16:05:28.576658 containerd[2027]: time="2025-02-13T16:05:28.575436261Z" level=info msg="CreateContainer within sandbox \"16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b\"" Feb 13 16:05:28.580078 containerd[2027]: time="2025-02-13T16:05:28.579985461Z" level=info msg="StartContainer for \"3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b\"" Feb 13 16:05:28.602395 containerd[2027]: time="2025-02-13T16:05:28.602338317Z" level=info msg="CreateContainer within sandbox \"60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9\"" Feb 13 16:05:28.603610 containerd[2027]: time="2025-02-13T16:05:28.603548157Z" level=info msg="StartContainer for \"62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9\"" Feb 13 16:05:28.607912 systemd[1]: Started cri-containerd-07f7ada7e11099142849dd00e2e2401893dd54f782b7ae158f1893fec3e37a4e.scope - libcontainer container 07f7ada7e11099142849dd00e2e2401893dd54f782b7ae158f1893fec3e37a4e. Feb 13 16:05:28.669505 systemd[1]: Started cri-containerd-3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b.scope - libcontainer container 3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b. Feb 13 16:05:28.694450 systemd[1]: Started cri-containerd-62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9.scope - libcontainer container 62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9. 
Feb 13 16:05:28.748376 containerd[2027]: time="2025-02-13T16:05:28.747217858Z" level=info msg="StartContainer for \"07f7ada7e11099142849dd00e2e2401893dd54f782b7ae158f1893fec3e37a4e\" returns successfully" Feb 13 16:05:28.816494 containerd[2027]: time="2025-02-13T16:05:28.816360262Z" level=info msg="StartContainer for \"3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b\" returns successfully" Feb 13 16:05:28.879060 containerd[2027]: time="2025-02-13T16:05:28.876979498Z" level=info msg="StartContainer for \"62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9\" returns successfully" Feb 13 16:05:30.156180 kubelet[2862]: I0213 16:05:30.156131 2862 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-154" Feb 13 16:05:33.189156 kubelet[2862]: E0213 16:05:33.189070 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-154\" not found" node="ip-172-31-31-154" Feb 13 16:05:33.296408 kubelet[2862]: I0213 16:05:33.296344 2862 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-154" Feb 13 16:05:33.377106 kubelet[2862]: E0213 16:05:33.376778 2862 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-154.1823d02250eeddc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-154,UID:ip-172-31-31-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-154,},FirstTimestamp:2025-02-13 16:05:26.900440517 +0000 UTC m=+1.654093726,LastTimestamp:2025-02-13 16:05:26.900440517 +0000 UTC m=+1.654093726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-154,}" Feb 13 16:05:33.440895 kubelet[2862]: E0213 16:05:33.440034 2862 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-154.1823d02252bace25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-154,UID:ip-172-31-31-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-154,},FirstTimestamp:2025-02-13 16:05:26.930583077 +0000 UTC m=+1.684236274,LastTimestamp:2025-02-13 16:05:26.930583077 +0000 UTC m=+1.684236274,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-154,}" Feb 13 16:05:33.522581 kubelet[2862]: E0213 16:05:33.522306 2862 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-154.1823d0225517bfd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-154,UID:ip-172-31-31-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-31-154 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-31-154,},FirstTimestamp:2025-02-13 16:05:26.970228689 +0000 UTC m=+1.723881886,LastTimestamp:2025-02-13 16:05:26.970228689 +0000 UTC m=+1.723881886,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-154,}" Feb 13 16:05:33.905656 kubelet[2862]: I0213 16:05:33.905335 2862 apiserver.go:52] "Watching apiserver" Feb 13 16:05:33.925559 kubelet[2862]: I0213 16:05:33.925503 2862 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 16:05:35.217431 systemd[1]: Reloading requested from client PID 3318 ('systemctl') (unit session-9.scope)... Feb 13 16:05:35.217461 systemd[1]: Reloading... Feb 13 16:05:35.414076 zram_generator::config[3361]: No configuration found. Feb 13 16:05:35.645921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:35.849788 systemd[1]: Reloading finished in 631 ms. Feb 13 16:05:35.928838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:35.945403 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:05:35.946086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:35.946170 systemd[1]: kubelet.service: Consumed 2.423s CPU time, 119.1M memory peak, 0B memory swap peak. Feb 13 16:05:35.961380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:36.246047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:36.264607 (kubelet)[3418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:05:36.379028 kubelet[3418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:36.379028 kubelet[3418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:05:36.379028 kubelet[3418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:36.379028 kubelet[3418]: I0213 16:05:36.378121 3418 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:05:36.394758 kubelet[3418]: I0213 16:05:36.394255 3418 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:05:36.394758 kubelet[3418]: I0213 16:05:36.394316 3418 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:05:36.395469 kubelet[3418]: I0213 16:05:36.395431 3418 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:05:36.399844 kubelet[3418]: I0213 16:05:36.398652 3418 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
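[Editor's note] Unlike the first run, this kubelet finds an existing bootstrap credential ("Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem"), so it does not need to POST a CertificateSigningRequest; client rotation will renew it in the background. A stdlib sketch for inspecting such a file's validity window (the path comes from the log; run on a node where the file exists):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path logged by the kubelet's certificate store above.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        // The file holds the client certificate and key concatenated; walk
        // the PEM blocks and report each certificate's subject and expiry.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s: not after %s\n", cert.Subject, cert.NotAfter)
        }
    }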
Feb 13 16:05:36.403743 kubelet[3418]: I0213 16:05:36.403484 3418 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:36.413222 kubelet[3418]: E0213 16:05:36.413161 3418 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:05:36.413450 kubelet[3418]: I0213 16:05:36.413409 3418 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:05:36.420586 kubelet[3418]: I0213 16:05:36.420481 3418 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:05:36.421981 kubelet[3418]: I0213 16:05:36.420876 3418 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:05:36.421981 kubelet[3418]: I0213 16:05:36.421234 3418 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:05:36.421981 kubelet[3418]: I0213 16:05:36.421279 3418 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-154","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 16:05:36.421981 kubelet[3418]: I0213 16:05:36.421559 3418 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:05:36.422449 kubelet[3418]: I0213 16:05:36.421581 3418 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:05:36.422449 kubelet[3418]: I0213 16:05:36.421635 3418 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:36.422449 kubelet[3418]: I0213 16:05:36.421833 3418 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:05:36.426669 kubelet[3418]: I0213 16:05:36.424803 3418 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:05:36.426669 kubelet[3418]: I0213 
16:05:36.424888 3418 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:05:36.426669 kubelet[3418]: I0213 16:05:36.424928 3418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:05:36.432368 kubelet[3418]: I0213 16:05:36.430835 3418 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:05:36.435243 kubelet[3418]: I0213 16:05:36.435074 3418 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:05:36.438923 kubelet[3418]: I0213 16:05:36.438869 3418 server.go:1269] "Started kubelet" Feb 13 16:05:36.451035 kubelet[3418]: I0213 16:05:36.450861 3418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:05:36.465272 kubelet[3418]: I0213 16:05:36.459201 3418 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:05:36.494726 kubelet[3418]: I0213 16:05:36.494671 3418 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:05:36.498356 kubelet[3418]: I0213 16:05:36.459937 3418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:05:36.500297 kubelet[3418]: I0213 16:05:36.500217 3418 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.500393 3418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.464078 3418 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:05:36.503409 kubelet[3418]: E0213 16:05:36.464139 3418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-154\" not found" Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.459493 3418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.501580 3418 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.464055 3418 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.493041 3418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:05:36.503409 kubelet[3418]: I0213 16:05:36.503074 3418 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:05:36.507208 kubelet[3418]: I0213 16:05:36.507151 3418 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 16:05:36.507208 kubelet[3418]: I0213 16:05:36.507198 3418 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:05:36.507388 kubelet[3418]: I0213 16:05:36.507231 3418 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 16:05:36.507388 kubelet[3418]: E0213 16:05:36.507307 3418 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:05:36.559098 kubelet[3418]: I0213 16:05:36.556754 3418 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:05:36.568690 kubelet[3418]: E0213 16:05:36.568644 3418 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:05:36.607855 kubelet[3418]: E0213 16:05:36.607798 3418 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:05:36.679569 kubelet[3418]: I0213 16:05:36.679521 3418 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:05:36.679569 kubelet[3418]: I0213 16:05:36.679555 3418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:05:36.679788 kubelet[3418]: I0213 16:05:36.679591 3418 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:36.680139 kubelet[3418]: I0213 16:05:36.679838 3418 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:05:36.680139 kubelet[3418]: I0213 16:05:36.679868 3418 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:05:36.680139 kubelet[3418]: I0213 16:05:36.679906 3418 policy_none.go:49] "None policy: Start" Feb 13 16:05:36.681303 kubelet[3418]: I0213 16:05:36.681262 3418 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:05:36.681425 kubelet[3418]: I0213 16:05:36.681311 3418 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:05:36.682564 kubelet[3418]: I0213 16:05:36.681613 3418 state_mem.go:75] "Updated machine memory state" Feb 13 16:05:36.691805 kubelet[3418]: I0213 16:05:36.691755 3418 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:05:36.692134 kubelet[3418]: I0213 16:05:36.692095 3418 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 16:05:36.692216 kubelet[3418]: I0213 16:05:36.692126 3418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:05:36.695663 kubelet[3418]: I0213 16:05:36.693877 3418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:05:36.818090 kubelet[3418]: I0213 16:05:36.817253 3418 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-154" Feb 13 16:05:36.840839 kubelet[3418]: I0213 16:05:36.840183 3418 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-154" Feb 13 16:05:36.840839 kubelet[3418]: I0213 16:05:36.840378 3418 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-154" Feb 13 16:05:36.906459 kubelet[3418]: I0213 16:05:36.906410 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:36.906787 kubelet[3418]: I0213 16:05:36.906745 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:36.907059 kubelet[3418]: I0213 16:05:36.906982 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:36.907518 kubelet[3418]: I0213 16:05:36.907188 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:36.907518 kubelet[3418]: I0213 16:05:36.907254 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d70c9b3e7bfe38f528c4b3efd0202219-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-154\" (UID: \"d70c9b3e7bfe38f528c4b3efd0202219\") " pod="kube-system/kube-scheduler-ip-172-31-31-154" Feb 13 16:05:36.907518 kubelet[3418]: I0213 16:05:36.907293 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dffdb9b3f9c5e1d8bebcb95b29482f9-ca-certs\") pod \"kube-apiserver-ip-172-31-31-154\" (UID: \"6dffdb9b3f9c5e1d8bebcb95b29482f9\") " pod="kube-system/kube-apiserver-ip-172-31-31-154" Feb 13 16:05:36.907518 kubelet[3418]: I0213 16:05:36.907333 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dffdb9b3f9c5e1d8bebcb95b29482f9-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-154\" (UID: \"6dffdb9b3f9c5e1d8bebcb95b29482f9\") " pod="kube-system/kube-apiserver-ip-172-31-31-154" Feb 13 16:05:36.907518 kubelet[3418]: I0213 16:05:36.907374 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dffdb9b3f9c5e1d8bebcb95b29482f9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-154\" (UID: \"6dffdb9b3f9c5e1d8bebcb95b29482f9\") " pod="kube-system/kube-apiserver-ip-172-31-31-154" Feb 13 16:05:36.907782 kubelet[3418]: I0213 16:05:36.907430 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d65d00bbf16f0f663e86ccc8c402feb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-154\" (UID: \"8d65d00bbf16f0f663e86ccc8c402feb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-154" Feb 13 16:05:37.426713 kubelet[3418]: I0213 16:05:37.426655 3418 apiserver.go:52] "Watching apiserver" Feb 13 16:05:37.501142 kubelet[3418]: I0213 16:05:37.501070 3418 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 16:05:37.734312 kubelet[3418]: I0213 16:05:37.734032 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-154" podStartSLOduration=1.733965078 podStartE2EDuration="1.733965078s" podCreationTimestamp="2025-02-13 16:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:37.729456006 +0000 UTC m=+1.454675828" watchObservedRunningTime="2025-02-13 16:05:37.733965078 +0000 UTC m=+1.459184888" Feb 13 16:05:37.833494 kubelet[3418]: I0213 16:05:37.833270 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-154" podStartSLOduration=1.833239591 podStartE2EDuration="1.833239591s" podCreationTimestamp="2025-02-13 16:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:37.782388415 +0000 UTC m=+1.507608249" watchObservedRunningTime="2025-02-13 16:05:37.833239591 +0000 UTC m=+1.558459401" Feb 13 16:05:37.871183 kubelet[3418]: I0213 16:05:37.870771 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-154" podStartSLOduration=1.8707523830000001 podStartE2EDuration="1.870752383s" podCreationTimestamp="2025-02-13 16:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:37.835581895 +0000 UTC m=+1.560801801" watchObservedRunningTime="2025-02-13 16:05:37.870752383 +0000 UTC m=+1.595972193" Feb 13 16:05:41.872660 kubelet[3418]: I0213 16:05:41.872327 3418 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:05:41.874553 kubelet[3418]: I0213 16:05:41.873391 3418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:05:41.874655 containerd[2027]: time="2025-02-13T16:05:41.873024371Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 16:05:42.391582 systemd[1]: Created slice kubepods-besteffort-pod71ea8b1b_82a0_4055_98f7_64c0bb7080fd.slice - libcontainer container kubepods-besteffort-pod71ea8b1b_82a0_4055_98f7_64c0bb7080fd.slice. 
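[Editor's note] At 16:05:41 the node finally receives its PodCIDR (192.168.0.0/24) and pushes it to the runtime; the "No cni config template is specified, wait for other system components to drop the config" message shows the CNI plugin (Calico, installed via the tigera-operator below) has not written its config yet. A quick sketch of what that allocation gives the node (CIDR value taken from the log):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("192.168.0.0/24") // PodCIDR from the log
        // A /24 leaves 2^(32-24) = 256 addresses for this node's pods.
        fmt.Printf("pod addresses on this node: %d\n", 1<<(32-prefix.Bits()))
        fmt.Println("first pod IP candidate:", prefix.Addr().Next()) // 192.168.0.1
    }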
Feb 13 16:05:42.442642 kubelet[3418]: I0213 16:05:42.442526 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71ea8b1b-82a0-4055-98f7-64c0bb7080fd-kube-proxy\") pod \"kube-proxy-dv587\" (UID: \"71ea8b1b-82a0-4055-98f7-64c0bb7080fd\") " pod="kube-system/kube-proxy-dv587" Feb 13 16:05:42.442939 kubelet[3418]: I0213 16:05:42.442639 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttd2k\" (UniqueName: \"kubernetes.io/projected/71ea8b1b-82a0-4055-98f7-64c0bb7080fd-kube-api-access-ttd2k\") pod \"kube-proxy-dv587\" (UID: \"71ea8b1b-82a0-4055-98f7-64c0bb7080fd\") " pod="kube-system/kube-proxy-dv587" Feb 13 16:05:42.442939 kubelet[3418]: I0213 16:05:42.442729 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71ea8b1b-82a0-4055-98f7-64c0bb7080fd-xtables-lock\") pod \"kube-proxy-dv587\" (UID: \"71ea8b1b-82a0-4055-98f7-64c0bb7080fd\") " pod="kube-system/kube-proxy-dv587" Feb 13 16:05:42.442939 kubelet[3418]: I0213 16:05:42.442777 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71ea8b1b-82a0-4055-98f7-64c0bb7080fd-lib-modules\") pod \"kube-proxy-dv587\" (UID: \"71ea8b1b-82a0-4055-98f7-64c0bb7080fd\") " pod="kube-system/kube-proxy-dv587" Feb 13 16:05:42.557217 kubelet[3418]: E0213 16:05:42.557164 3418 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 16:05:42.557217 kubelet[3418]: E0213 16:05:42.557217 3418 projected.go:194] Error preparing data for projected volume kube-api-access-ttd2k for pod kube-system/kube-proxy-dv587: configmap "kube-root-ca.crt" not found Feb 13 16:05:42.557449 kubelet[3418]: E0213 16:05:42.557319 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71ea8b1b-82a0-4055-98f7-64c0bb7080fd-kube-api-access-ttd2k podName:71ea8b1b-82a0-4055-98f7-64c0bb7080fd nodeName:}" failed. No retries permitted until 2025-02-13 16:05:43.057284486 +0000 UTC m=+6.782504296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ttd2k" (UniqueName: "kubernetes.io/projected/71ea8b1b-82a0-4055-98f7-64c0bb7080fd-kube-api-access-ttd2k") pod "kube-proxy-dv587" (UID: "71ea8b1b-82a0-4055-98f7-64c0bb7080fd") : configmap "kube-root-ca.crt" not found Feb 13 16:05:42.930179 systemd[1]: Created slice kubepods-besteffort-pod0d1247ea_ec9b_4dd6_9809_fa6f4d21aa64.slice - libcontainer container kubepods-besteffort-pod0d1247ea_ec9b_4dd6_9809_fa6f4d21aa64.slice. 
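[Annotation, not part of the journal: the configmap "kube-root-ca.crt" not found failure above is transient. kube-controller-manager publishes that ConfigMap into every namespace shortly after the namespace exists, which is why the kubelet schedules a retry (durationBeforeRetry 500ms) instead of failing the mount outright. A minimal sketch of the same wait, assuming the "kubernetes" Python client and a reachable kubeconfig; only the ConfigMap name, the namespace, and the retry interval come from the log:

    import time

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def wait_for_root_ca(namespace: str = "kube-system", timeout_s: float = 30.0) -> bool:
        """Return True once kube-root-ca.crt exists in the namespace."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                v1.read_namespaced_config_map("kube-root-ca.crt", namespace)
                return True
            except ApiException as exc:
                if exc.status != 404:  # anything other than "not found" is a real failure
                    raise
            time.sleep(0.5)  # the same 500 ms durationBeforeRetry the kubelet logged
        return False

    print(wait_for_root_ca())
]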
Feb 13 16:05:42.947511 kubelet[3418]: I0213 16:05:42.947450 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgjl\" (UniqueName: \"kubernetes.io/projected/0d1247ea-ec9b-4dd6-9809-fa6f4d21aa64-kube-api-access-nvgjl\") pod \"tigera-operator-76c4976dd7-62q5g\" (UID: \"0d1247ea-ec9b-4dd6-9809-fa6f4d21aa64\") " pod="tigera-operator/tigera-operator-76c4976dd7-62q5g" Feb 13 16:05:42.948144 kubelet[3418]: I0213 16:05:42.948095 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d1247ea-ec9b-4dd6-9809-fa6f4d21aa64-var-lib-calico\") pod \"tigera-operator-76c4976dd7-62q5g\" (UID: \"0d1247ea-ec9b-4dd6-9809-fa6f4d21aa64\") " pod="tigera-operator/tigera-operator-76c4976dd7-62q5g" Feb 13 16:05:43.238636 containerd[2027]: time="2025-02-13T16:05:43.238561810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-62q5g,Uid:0d1247ea-ec9b-4dd6-9809-fa6f4d21aa64,Namespace:tigera-operator,Attempt:0,}" Feb 13 16:05:43.280855 containerd[2027]: time="2025-02-13T16:05:43.280687966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:43.281059 containerd[2027]: time="2025-02-13T16:05:43.280885210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:43.281059 containerd[2027]: time="2025-02-13T16:05:43.280946566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:43.281686 containerd[2027]: time="2025-02-13T16:05:43.281536726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:43.307418 containerd[2027]: time="2025-02-13T16:05:43.306874114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dv587,Uid:71ea8b1b-82a0-4055-98f7-64c0bb7080fd,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:43.323309 systemd[1]: Started cri-containerd-9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e.scope - libcontainer container 9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e. Feb 13 16:05:43.359983 containerd[2027]: time="2025-02-13T16:05:43.359470054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:43.359983 containerd[2027]: time="2025-02-13T16:05:43.359725390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:43.360448 containerd[2027]: time="2025-02-13T16:05:43.359847814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:43.362840 containerd[2027]: time="2025-02-13T16:05:43.362556946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:43.409463 systemd[1]: Started cri-containerd-2961d07cd5d9bf908bbdbd2643d4fdb856a1399d3ca96ff33bf910d68b92b7e0.scope - libcontainer container 2961d07cd5d9bf908bbdbd2643d4fdb856a1399d3ca96ff33bf910d68b92b7e0. 
Feb 13 16:05:43.422591 containerd[2027]: time="2025-02-13T16:05:43.422264063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-62q5g,Uid:0d1247ea-ec9b-4dd6-9809-fa6f4d21aa64,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e\"" Feb 13 16:05:43.427970 containerd[2027]: time="2025-02-13T16:05:43.426853943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 16:05:43.466885 containerd[2027]: time="2025-02-13T16:05:43.466804667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dv587,Uid:71ea8b1b-82a0-4055-98f7-64c0bb7080fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2961d07cd5d9bf908bbdbd2643d4fdb856a1399d3ca96ff33bf910d68b92b7e0\"" Feb 13 16:05:43.471501 containerd[2027]: time="2025-02-13T16:05:43.471448703Z" level=info msg="CreateContainer within sandbox \"2961d07cd5d9bf908bbdbd2643d4fdb856a1399d3ca96ff33bf910d68b92b7e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:05:43.501356 containerd[2027]: time="2025-02-13T16:05:43.498179435Z" level=info msg="CreateContainer within sandbox \"2961d07cd5d9bf908bbdbd2643d4fdb856a1399d3ca96ff33bf910d68b92b7e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bb4ac47501d588d6b00111bbb87862199ea7b1cc741ec1da88c02d1caef42800\"" Feb 13 16:05:43.501356 containerd[2027]: time="2025-02-13T16:05:43.499186379Z" level=info msg="StartContainer for \"bb4ac47501d588d6b00111bbb87862199ea7b1cc741ec1da88c02d1caef42800\"" Feb 13 16:05:43.563102 systemd[1]: Started cri-containerd-bb4ac47501d588d6b00111bbb87862199ea7b1cc741ec1da88c02d1caef42800.scope - libcontainer container bb4ac47501d588d6b00111bbb87862199ea7b1cc741ec1da88c02d1caef42800. Feb 13 16:05:43.649377 containerd[2027]: time="2025-02-13T16:05:43.649285392Z" level=info msg="StartContainer for \"bb4ac47501d588d6b00111bbb87862199ea7b1cc741ec1da88c02d1caef42800\" returns successfully" Feb 13 16:05:43.813801 sudo[2370]: pam_unix(sudo:session): session closed for user root Feb 13 16:05:43.837892 sshd[2355]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:43.848159 systemd[1]: sshd@8-172.31.31.154:22-139.178.68.195:49762.service: Deactivated successfully. Feb 13 16:05:43.853803 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:05:43.855209 systemd[1]: session-9.scope: Consumed 9.118s CPU time, 149.2M memory peak, 0B memory swap peak. Feb 13 16:05:43.861523 systemd-logind[2001]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:05:43.866368 systemd-logind[2001]: Removed session 9. Feb 13 16:05:44.861341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388851414.mount: Deactivated successfully. 
Feb 13 16:05:45.000468 kubelet[3418]: I0213 16:05:44.998924 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dv587" podStartSLOduration=2.998900775 podStartE2EDuration="2.998900775s" podCreationTimestamp="2025-02-13 16:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:44.679054405 +0000 UTC m=+8.404274239" watchObservedRunningTime="2025-02-13 16:05:44.998900775 +0000 UTC m=+8.724120585" Feb 13 16:05:45.603268 containerd[2027]: time="2025-02-13T16:05:45.603200138Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:45.604892 containerd[2027]: time="2025-02-13T16:05:45.604824278Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 16:05:45.605726 containerd[2027]: time="2025-02-13T16:05:45.605632946Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:45.611116 containerd[2027]: time="2025-02-13T16:05:45.611021654Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:45.613065 containerd[2027]: time="2025-02-13T16:05:45.612552290Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.185631903s" Feb 13 16:05:45.613065 containerd[2027]: time="2025-02-13T16:05:45.612608210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 16:05:45.618642 containerd[2027]: time="2025-02-13T16:05:45.618509282Z" level=info msg="CreateContainer within sandbox \"9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 16:05:45.640783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480254951.mount: Deactivated successfully. Feb 13 16:05:45.641510 containerd[2027]: time="2025-02-13T16:05:45.641323430Z" level=info msg="CreateContainer within sandbox \"9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f\"" Feb 13 16:05:45.643919 containerd[2027]: time="2025-02-13T16:05:45.642385166Z" level=info msg="StartContainer for \"27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f\"" Feb 13 16:05:45.720203 systemd[1]: Started cri-containerd-27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f.scope - libcontainer container 27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f. 
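[Annotation, not part of the journal: in the pod_startup_latency_tracker entries, podStartSLOduration is the end-to-end startup duration with the observed image-pull window subtracted. kube-proxy above reports identical SLO and E2E values because no pull was observed (both pull timestamps are the zero time 0001-01-01), while the tigera-operator entry further down reports 6.361030371s against an 8.549937902s E2E. A quick check using only numbers copied from the journal:

    # podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
    # Values below are from the tigera-operator-76c4976dd7-62q5g entries,
    # with pull timestamps expressed as seconds past 16:05.
    e2e = 8.549937902            # podStartE2EDuration
    pull_started = 43.425576447  # firstStartedPulling
    pull_done = 45.614483978     # lastFinishedPulling
    slo = e2e - (pull_done - pull_started)
    print(f"{slo:.9f}")          # -> 6.361030371, matching podStartSLOduration
]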
Feb 13 16:05:45.774205 containerd[2027]: time="2025-02-13T16:05:45.774142562Z" level=info msg="StartContainer for \"27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f\" returns successfully" Feb 13 16:05:50.550159 kubelet[3418]: I0213 16:05:50.549962 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-62q5g" podStartSLOduration=6.361030371 podStartE2EDuration="8.549937902s" podCreationTimestamp="2025-02-13 16:05:42 +0000 UTC" firstStartedPulling="2025-02-13 16:05:43.425576447 +0000 UTC m=+7.150796257" lastFinishedPulling="2025-02-13 16:05:45.614483978 +0000 UTC m=+9.339703788" observedRunningTime="2025-02-13 16:05:46.695779167 +0000 UTC m=+10.420999001" watchObservedRunningTime="2025-02-13 16:05:50.549937902 +0000 UTC m=+14.275157724" Feb 13 16:05:50.572410 kubelet[3418]: W0213 16:05:50.570541 3418 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-154" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-154' and this object Feb 13 16:05:50.572410 kubelet[3418]: E0213 16:05:50.570656 3418 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-31-154\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-31-154' and this object" logger="UnhandledError" Feb 13 16:05:50.572410 kubelet[3418]: W0213 16:05:50.570748 3418 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-31-154" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-154' and this object Feb 13 16:05:50.572410 kubelet[3418]: E0213 16:05:50.570774 3418 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-31-154\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-31-154' and this object" logger="UnhandledError" Feb 13 16:05:50.572410 kubelet[3418]: W0213 16:05:50.570840 3418 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-31-154" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-154' and this object Feb 13 16:05:50.572942 kubelet[3418]: E0213 16:05:50.570864 3418 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-31-154\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-31-154' and this object" logger="UnhandledError" Feb 13 16:05:50.573093 systemd[1]: Created slice kubepods-besteffort-pod596d77c2_5188_4e2e_b568_9bd07573d99a.slice - libcontainer container 
kubepods-besteffort-pod596d77c2_5188_4e2e_b568_9bd07573d99a.slice. Feb 13 16:05:50.603426 kubelet[3418]: I0213 16:05:50.602315 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86wn\" (UniqueName: \"kubernetes.io/projected/596d77c2-5188-4e2e-b568-9bd07573d99a-kube-api-access-x86wn\") pod \"calico-typha-7c495b57bd-w2xhn\" (UID: \"596d77c2-5188-4e2e-b568-9bd07573d99a\") " pod="calico-system/calico-typha-7c495b57bd-w2xhn" Feb 13 16:05:50.603426 kubelet[3418]: I0213 16:05:50.602387 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/596d77c2-5188-4e2e-b568-9bd07573d99a-typha-certs\") pod \"calico-typha-7c495b57bd-w2xhn\" (UID: \"596d77c2-5188-4e2e-b568-9bd07573d99a\") " pod="calico-system/calico-typha-7c495b57bd-w2xhn" Feb 13 16:05:50.603426 kubelet[3418]: I0213 16:05:50.602431 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/596d77c2-5188-4e2e-b568-9bd07573d99a-tigera-ca-bundle\") pod \"calico-typha-7c495b57bd-w2xhn\" (UID: \"596d77c2-5188-4e2e-b568-9bd07573d99a\") " pod="calico-system/calico-typha-7c495b57bd-w2xhn" Feb 13 16:05:50.908164 kubelet[3418]: I0213 16:05:50.907011 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-var-run-calico\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908164 kubelet[3418]: I0213 16:05:50.907076 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-cni-net-dir\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908164 kubelet[3418]: I0213 16:05:50.907114 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-cni-log-dir\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908164 kubelet[3418]: I0213 16:05:50.907153 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-cni-bin-dir\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908164 kubelet[3418]: I0213 16:05:50.907202 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-xtables-lock\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908519 kubelet[3418]: I0213 16:05:50.907241 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84c4358a-bad6-4f54-b585-f66e4394a9a4-tigera-ca-bundle\") pod \"calico-node-fxbcc\" (UID: 
\"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908519 kubelet[3418]: I0213 16:05:50.907280 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/84c4358a-bad6-4f54-b585-f66e4394a9a4-node-certs\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908519 kubelet[3418]: I0213 16:05:50.907317 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2pjr\" (UniqueName: \"kubernetes.io/projected/84c4358a-bad6-4f54-b585-f66e4394a9a4-kube-api-access-f2pjr\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908519 kubelet[3418]: I0213 16:05:50.907366 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-flexvol-driver-host\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908519 kubelet[3418]: I0213 16:05:50.907408 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-lib-modules\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908771 kubelet[3418]: I0213 16:05:50.907467 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-var-lib-calico\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.908771 kubelet[3418]: I0213 16:05:50.907516 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/84c4358a-bad6-4f54-b585-f66e4394a9a4-policysync\") pod \"calico-node-fxbcc\" (UID: \"84c4358a-bad6-4f54-b585-f66e4394a9a4\") " pod="calico-system/calico-node-fxbcc" Feb 13 16:05:50.909928 systemd[1]: Created slice kubepods-besteffort-pod84c4358a_bad6_4f54_b585_f66e4394a9a4.slice - libcontainer container kubepods-besteffort-pod84c4358a_bad6_4f54_b585_f66e4394a9a4.slice. Feb 13 16:05:51.034197 kubelet[3418]: E0213 16:05:51.034139 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.034197 kubelet[3418]: W0213 16:05:51.034181 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.034372 kubelet[3418]: E0213 16:05:51.034234 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.055051 kubelet[3418]: E0213 16:05:51.054941 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:05:51.097742 kubelet[3418]: E0213 16:05:51.096835 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.097948 kubelet[3418]: W0213 16:05:51.097757 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.097948 kubelet[3418]: E0213 16:05:51.097837 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.101033 kubelet[3418]: E0213 16:05:51.099749 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.101033 kubelet[3418]: W0213 16:05:51.100113 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.101033 kubelet[3418]: E0213 16:05:51.100176 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.103033 kubelet[3418]: E0213 16:05:51.102902 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.103033 kubelet[3418]: W0213 16:05:51.102974 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.103280 kubelet[3418]: E0213 16:05:51.103074 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.108915 kubelet[3418]: E0213 16:05:51.108747 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.108915 kubelet[3418]: W0213 16:05:51.108833 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.109152 kubelet[3418]: E0213 16:05:51.109021 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.110202 kubelet[3418]: E0213 16:05:51.110125 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.110202 kubelet[3418]: W0213 16:05:51.110182 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.110455 kubelet[3418]: E0213 16:05:51.110236 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.111545 kubelet[3418]: E0213 16:05:51.111490 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.111751 kubelet[3418]: W0213 16:05:51.111529 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.111751 kubelet[3418]: E0213 16:05:51.111711 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.113647 kubelet[3418]: E0213 16:05:51.113377 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.113647 kubelet[3418]: W0213 16:05:51.113416 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.113647 kubelet[3418]: E0213 16:05:51.113452 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.115780 kubelet[3418]: E0213 16:05:51.115719 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.115780 kubelet[3418]: W0213 16:05:51.115767 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.116506 kubelet[3418]: E0213 16:05:51.116190 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.118381 kubelet[3418]: E0213 16:05:51.118319 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.118381 kubelet[3418]: W0213 16:05:51.118372 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.118631 kubelet[3418]: E0213 16:05:51.118412 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.120507 kubelet[3418]: E0213 16:05:51.120450 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.120507 kubelet[3418]: W0213 16:05:51.120492 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.120727 kubelet[3418]: E0213 16:05:51.120528 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.122113 kubelet[3418]: E0213 16:05:51.122007 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.122113 kubelet[3418]: W0213 16:05:51.122097 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.122400 kubelet[3418]: E0213 16:05:51.122150 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.123455 kubelet[3418]: E0213 16:05:51.123164 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.123455 kubelet[3418]: W0213 16:05:51.123198 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.123455 kubelet[3418]: E0213 16:05:51.123230 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.123911 kubelet[3418]: E0213 16:05:51.123879 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.124281 kubelet[3418]: W0213 16:05:51.124060 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.124281 kubelet[3418]: E0213 16:05:51.124103 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.125457 kubelet[3418]: E0213 16:05:51.125186 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.125457 kubelet[3418]: W0213 16:05:51.125225 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.125457 kubelet[3418]: E0213 16:05:51.125258 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.127337 kubelet[3418]: E0213 16:05:51.127281 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.127537 kubelet[3418]: W0213 16:05:51.127509 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.127654 kubelet[3418]: E0213 16:05:51.127628 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.128549 kubelet[3418]: E0213 16:05:51.128214 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.128549 kubelet[3418]: W0213 16:05:51.128243 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.128549 kubelet[3418]: E0213 16:05:51.128335 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.128957 kubelet[3418]: E0213 16:05:51.128931 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.129130 kubelet[3418]: W0213 16:05:51.129099 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.129247 kubelet[3418]: E0213 16:05:51.129222 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.129780 kubelet[3418]: E0213 16:05:51.129750 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.129919 kubelet[3418]: W0213 16:05:51.129894 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.130249 kubelet[3418]: E0213 16:05:51.130056 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.130657 kubelet[3418]: E0213 16:05:51.130463 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.130657 kubelet[3418]: W0213 16:05:51.130488 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.130657 kubelet[3418]: E0213 16:05:51.130512 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.132034 kubelet[3418]: E0213 16:05:51.130929 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.132287 kubelet[3418]: W0213 16:05:51.132241 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.132466 kubelet[3418]: E0213 16:05:51.132438 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.134681 kubelet[3418]: E0213 16:05:51.134309 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.134681 kubelet[3418]: W0213 16:05:51.134362 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.134681 kubelet[3418]: E0213 16:05:51.134395 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.134681 kubelet[3418]: I0213 16:05:51.134455 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzck\" (UniqueName: \"kubernetes.io/projected/a33f313d-041d-426a-9914-5ac8d0c42820-kube-api-access-8fzck\") pod \"csi-node-driver-t4668\" (UID: \"a33f313d-041d-426a-9914-5ac8d0c42820\") " pod="calico-system/csi-node-driver-t4668" Feb 13 16:05:51.137717 kubelet[3418]: E0213 16:05:51.135838 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.137717 kubelet[3418]: W0213 16:05:51.135872 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.137717 kubelet[3418]: E0213 16:05:51.135919 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.138511 kubelet[3418]: E0213 16:05:51.138313 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.138511 kubelet[3418]: W0213 16:05:51.138347 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.138511 kubelet[3418]: E0213 16:05:51.138381 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.140233 kubelet[3418]: E0213 16:05:51.139649 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.140233 kubelet[3418]: W0213 16:05:51.139685 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.140233 kubelet[3418]: E0213 16:05:51.139774 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.140233 kubelet[3418]: I0213 16:05:51.139839 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a33f313d-041d-426a-9914-5ac8d0c42820-socket-dir\") pod \"csi-node-driver-t4668\" (UID: \"a33f313d-041d-426a-9914-5ac8d0c42820\") " pod="calico-system/csi-node-driver-t4668" Feb 13 16:05:51.141857 kubelet[3418]: E0213 16:05:51.141562 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.141857 kubelet[3418]: W0213 16:05:51.141621 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.141857 kubelet[3418]: E0213 16:05:51.141672 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.142721 kubelet[3418]: E0213 16:05:51.142512 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.142721 kubelet[3418]: W0213 16:05:51.142544 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.142721 kubelet[3418]: E0213 16:05:51.142612 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.144428 kubelet[3418]: E0213 16:05:51.144174 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.144428 kubelet[3418]: W0213 16:05:51.144209 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.144428 kubelet[3418]: E0213 16:05:51.144276 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.145094 kubelet[3418]: E0213 16:05:51.144974 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.145247 kubelet[3418]: W0213 16:05:51.145218 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.145838 kubelet[3418]: E0213 16:05:51.145354 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.146241 kubelet[3418]: E0213 16:05:51.146197 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.146241 kubelet[3418]: W0213 16:05:51.146233 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.147075 kubelet[3418]: E0213 16:05:51.146927 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.147499 kubelet[3418]: E0213 16:05:51.147459 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.147499 kubelet[3418]: W0213 16:05:51.147492 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.147641 kubelet[3418]: E0213 16:05:51.147522 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.149315 kubelet[3418]: E0213 16:05:51.149257 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.149315 kubelet[3418]: W0213 16:05:51.149300 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.149543 kubelet[3418]: E0213 16:05:51.149335 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.149543 kubelet[3418]: I0213 16:05:51.149410 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a33f313d-041d-426a-9914-5ac8d0c42820-varrun\") pod \"csi-node-driver-t4668\" (UID: \"a33f313d-041d-426a-9914-5ac8d0c42820\") " pod="calico-system/csi-node-driver-t4668" Feb 13 16:05:51.150260 kubelet[3418]: E0213 16:05:51.150208 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.150260 kubelet[3418]: W0213 16:05:51.150246 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.150424 kubelet[3418]: E0213 16:05:51.150288 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.150424 kubelet[3418]: I0213 16:05:51.150331 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a33f313d-041d-426a-9914-5ac8d0c42820-registration-dir\") pod \"csi-node-driver-t4668\" (UID: \"a33f313d-041d-426a-9914-5ac8d0c42820\") " pod="calico-system/csi-node-driver-t4668" Feb 13 16:05:51.151983 kubelet[3418]: E0213 16:05:51.151928 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.152251 kubelet[3418]: W0213 16:05:51.151971 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.152251 kubelet[3418]: E0213 16:05:51.152074 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.153209 kubelet[3418]: E0213 16:05:51.153154 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.153209 kubelet[3418]: W0213 16:05:51.153193 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.153588 kubelet[3418]: E0213 16:05:51.153434 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.153652 kubelet[3418]: E0213 16:05:51.153623 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.153652 kubelet[3418]: W0213 16:05:51.153640 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.153832 kubelet[3418]: E0213 16:05:51.153806 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:05:51.155248 kubelet[3418]: E0213 16:05:51.155194 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.155248 kubelet[3418]: W0213 16:05:51.155234 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.155564 kubelet[3418]: E0213 16:05:51.155381 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.155564 kubelet[3418]: I0213 16:05:51.155436 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a33f313d-041d-426a-9914-5ac8d0c42820-kubelet-dir\") pod \"csi-node-driver-t4668\" (UID: \"a33f313d-041d-426a-9914-5ac8d0c42820\") " pod="calico-system/csi-node-driver-t4668" Feb 13 16:05:51.156031 kubelet[3418]: E0213 16:05:51.155629 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.156031 kubelet[3418]: W0213 16:05:51.155645 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.156031 kubelet[3418]: E0213 16:05:51.155667 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.156194 kubelet[3418]: E0213 16:05:51.156106 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.156194 kubelet[3418]: W0213 16:05:51.156127 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.156194 kubelet[3418]: E0213 16:05:51.156157 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:05:51.157405 kubelet[3418]: E0213 16:05:51.157308 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:05:51.157405 kubelet[3418]: W0213 16:05:51.157348 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:05:51.157405 kubelet[3418]: E0213 16:05:51.157382 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 16:05:51.159777 kubelet[3418]: E0213 16:05:51.158772 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:05:51.159777 kubelet[3418]: W0213 16:05:51.158812 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:05:51.159777 kubelet[3418]: E0213 16:05:51.158847 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 16:05:51.704067 kubelet[3418]: E0213 16:05:51.703331 3418 secret.go:188] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Feb 13 16:05:51.704067 kubelet[3418]: E0213 16:05:51.703418 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/596d77c2-5188-4e2e-b568-9bd07573d99a-typha-certs podName:596d77c2-5188-4e2e-b568-9bd07573d99a nodeName:}" failed. No retries permitted until 2025-02-13 16:05:52.20339298 +0000 UTC m=+15.928612790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/596d77c2-5188-4e2e-b568-9bd07573d99a-typha-certs") pod "calico-typha-7c495b57bd-w2xhn" (UID: "596d77c2-5188-4e2e-b568-9bd07573d99a") : failed to sync secret cache: timed out waiting for the condition
Feb 13 16:05:51.704067 kubelet[3418]: E0213 16:05:51.703451 3418 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:05:51.704067 kubelet[3418]: E0213 16:05:51.703503 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/596d77c2-5188-4e2e-b568-9bd07573d99a-tigera-ca-bundle podName:596d77c2-5188-4e2e-b568-9bd07573d99a nodeName:}" failed. No retries permitted until 2025-02-13 16:05:52.203484576 +0000 UTC m=+15.928704374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/596d77c2-5188-4e2e-b568-9bd07573d99a-tigera-ca-bundle") pod "calico-typha-7c495b57bd-w2xhn" (UID: "596d77c2-5188-4e2e-b568-9bd07573d99a") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:05:51.730070 kubelet[3418]: E0213 16:05:51.730029 3418 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:05:51.730515 kubelet[3418]: E0213 16:05:51.730268 3418 projected.go:194] Error preparing data for projected volume kube-api-access-x86wn for pod calico-system/calico-typha-7c495b57bd-w2xhn: failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:05:51.730515 kubelet[3418]: E0213 16:05:51.730439 3418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/596d77c2-5188-4e2e-b568-9bd07573d99a-kube-api-access-x86wn podName:596d77c2-5188-4e2e-b568-9bd07573d99a nodeName:}" failed. No retries permitted until 2025-02-13 16:05:52.230386704 +0000 UTC m=+15.955606514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x86wn" (UniqueName: "kubernetes.io/projected/596d77c2-5188-4e2e-b568-9bd07573d99a-kube-api-access-x86wn") pod "calico-typha-7c495b57bd-w2xhn" (UID: "596d77c2-5188-4e2e-b568-9bd07573d99a") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:05:52.123433 containerd[2027]: time="2025-02-13T16:05:52.123203094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fxbcc,Uid:84c4358a-bad6-4f54-b585-f66e4394a9a4,Namespace:calico-system,Attempt:0,}"
Feb 13 16:05:52.179227 containerd[2027]: time="2025-02-13T16:05:52.178756062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:05:52.179227 containerd[2027]: time="2025-02-13T16:05:52.178947042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:05:52.179227 containerd[2027]: time="2025-02-13T16:05:52.179033466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:05:52.179715 containerd[2027]: time="2025-02-13T16:05:52.179391954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:05:52.238444 systemd[1]: Started cri-containerd-27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935.scope - libcontainer container 27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935.
Feb 13 16:05:52.267965 kubelet[3418]: E0213 16:05:52.267898 3418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:05:52.267965 kubelet[3418]: W0213 16:05:52.267939 3418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:05:52.267965 kubelet[3418]: E0213 16:05:52.267972 3418 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 16:05:52.307524 containerd[2027]: time="2025-02-13T16:05:52.307344655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fxbcc,Uid:84c4358a-bad6-4f54-b585-f66e4394a9a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\"" Feb 13 16:05:52.310885 containerd[2027]: time="2025-02-13T16:05:52.310442263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 16:05:52.384102 containerd[2027]: time="2025-02-13T16:05:52.383932615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c495b57bd-w2xhn,Uid:596d77c2-5188-4e2e-b568-9bd07573d99a,Namespace:calico-system,Attempt:0,}" Feb 13 16:05:52.427285 containerd[2027]: time="2025-02-13T16:05:52.426666259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:52.427285 containerd[2027]: time="2025-02-13T16:05:52.426941923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:52.427285 containerd[2027]: time="2025-02-13T16:05:52.427029655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:52.427285 containerd[2027]: time="2025-02-13T16:05:52.427231351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:52.459397 systemd[1]: Started cri-containerd-326d7407b9c991dd325ff644c3d3e9a73c9ee4fc3d58564ba9fd603dbeecd157.scope - libcontainer container 326d7407b9c991dd325ff644c3d3e9a73c9ee4fc3d58564ba9fd603dbeecd157. Feb 13 16:05:52.508506 kubelet[3418]: E0213 16:05:52.508369 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:05:52.549368 containerd[2027]: time="2025-02-13T16:05:52.549281156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c495b57bd-w2xhn,Uid:596d77c2-5188-4e2e-b568-9bd07573d99a,Namespace:calico-system,Attempt:0,} returns sandbox id \"326d7407b9c991dd325ff644c3d3e9a73c9ee4fc3d58564ba9fd603dbeecd157\"" Feb 13 16:05:54.023148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621435280.mount: Deactivated successfully. 
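The repeated driver-call.go errors above come from the kubelet's FlexVolume probe loop: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes each vendor~driver executable with the argument init, and unmarshals the JSON the driver prints to stdout. Here the nodeagent~uds directory exists but its uds binary does not, so each call returns empty output and unmarshalling fails with "unexpected end of JSON input". A minimal Go sketch of the response shape a FlexVolume driver is expected to emit on init (illustrative only; this is not the real uds driver the directory name refers to):

    package main

    import (
    	"encoding/json"
    	"os"
    )

    // DriverStatus mirrors the JSON object the kubelet's FlexVolume
    // driver-call machinery expects on stdout; "status" is one of
    // "Success", "Failure", or "Not supported".
    type DriverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// An empty stdout here is exactly what produces
    		// "unexpected end of JSON input" in the log above.
    		json.NewEncoder(os.Stdout).Encode(DriverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    	}
    }

Installing such a stub (or removing the empty nodeagent~uds directory) would quiet this probe loop; the errors are otherwise harmless noise while the missing binary's pod has not yet deployed it.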
Feb 13 16:05:54.214038 containerd[2027]: time="2025-02-13T16:05:54.212809160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:54.216167 containerd[2027]: time="2025-02-13T16:05:54.216082016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 16:05:54.218371 containerd[2027]: time="2025-02-13T16:05:54.218290604Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:54.225459 containerd[2027]: time="2025-02-13T16:05:54.225341408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:54.227159 containerd[2027]: time="2025-02-13T16:05:54.226792364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.916286801s" Feb 13 16:05:54.227159 containerd[2027]: time="2025-02-13T16:05:54.226860416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 16:05:54.232321 containerd[2027]: time="2025-02-13T16:05:54.231791504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 16:05:54.235977 containerd[2027]: time="2025-02-13T16:05:54.235870976Z" level=info msg="CreateContainer within sandbox \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 16:05:54.283730 containerd[2027]: time="2025-02-13T16:05:54.282252993Z" level=info msg="CreateContainer within sandbox \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9\"" Feb 13 16:05:54.284727 containerd[2027]: time="2025-02-13T16:05:54.284336217Z" level=info msg="StartContainer for \"38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9\"" Feb 13 16:05:54.349352 systemd[1]: Started cri-containerd-38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9.scope - libcontainer container 38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9. Feb 13 16:05:54.417409 containerd[2027]: time="2025-02-13T16:05:54.417342801Z" level=info msg="StartContainer for \"38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9\" returns successfully" Feb 13 16:05:54.448428 systemd[1]: cri-containerd-38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9.scope: Deactivated successfully. Feb 13 16:05:54.501300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9-rootfs.mount: Deactivated successfully. 
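The pull of ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1 above is driven through the CRI, but the equivalent operation against containerd's v1 Go client looks roughly like the sketch below. The socket path and the k8s.io namespace are the defaults a kubelet-managed containerd uses; error handling is trimmed, and this is a simplified illustration rather than the kubelet's actual code path:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in containerd's "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx,
    		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name())
    }

On success the log records both the repo tag and the repo digest, as in the "Pulled image ... in 1.916286801s" entry above.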
Feb 13 16:05:54.514309 kubelet[3418]: E0213 16:05:54.513667 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:05:54.628900 containerd[2027]: time="2025-02-13T16:05:54.628677886Z" level=info msg="shim disconnected" id=38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9 namespace=k8s.io Feb 13 16:05:54.628900 containerd[2027]: time="2025-02-13T16:05:54.628769026Z" level=warning msg="cleaning up after shim disconnected" id=38181cdfb183fbb90251a5002db212a7faa072707f983e74515f10a734973ac9 namespace=k8s.io Feb 13 16:05:54.628900 containerd[2027]: time="2025-02-13T16:05:54.628806910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:56.512437 kubelet[3418]: E0213 16:05:56.512354 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:05:57.017407 containerd[2027]: time="2025-02-13T16:05:57.017196334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.018793 containerd[2027]: time="2025-02-13T16:05:57.018695014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Feb 13 16:05:57.019683 containerd[2027]: time="2025-02-13T16:05:57.019597954Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.023396 containerd[2027]: time="2025-02-13T16:05:57.023312554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.025057 containerd[2027]: time="2025-02-13T16:05:57.024843874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.792959838s" Feb 13 16:05:57.025057 containerd[2027]: time="2025-02-13T16:05:57.024902782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 16:05:57.029438 containerd[2027]: time="2025-02-13T16:05:57.028207102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 16:05:57.055798 containerd[2027]: time="2025-02-13T16:05:57.055725214Z" level=info msg="CreateContainer within sandbox \"326d7407b9c991dd325ff644c3d3e9a73c9ee4fc3d58564ba9fd603dbeecd157\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 16:05:57.079268 containerd[2027]: time="2025-02-13T16:05:57.079206575Z" level=info msg="CreateContainer within sandbox 
\"326d7407b9c991dd325ff644c3d3e9a73c9ee4fc3d58564ba9fd603dbeecd157\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe468540ae4037e2cb55a9407dc8dabea92f5160916185488c4808616e71df49\"" Feb 13 16:05:57.080984 containerd[2027]: time="2025-02-13T16:05:57.080928347Z" level=info msg="StartContainer for \"fe468540ae4037e2cb55a9407dc8dabea92f5160916185488c4808616e71df49\"" Feb 13 16:05:57.143320 systemd[1]: Started cri-containerd-fe468540ae4037e2cb55a9407dc8dabea92f5160916185488c4808616e71df49.scope - libcontainer container fe468540ae4037e2cb55a9407dc8dabea92f5160916185488c4808616e71df49. Feb 13 16:05:57.213856 containerd[2027]: time="2025-02-13T16:05:57.213781823Z" level=info msg="StartContainer for \"fe468540ae4037e2cb55a9407dc8dabea92f5160916185488c4808616e71df49\" returns successfully" Feb 13 16:05:57.746187 kubelet[3418]: I0213 16:05:57.746094 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c495b57bd-w2xhn" podStartSLOduration=3.271319672 podStartE2EDuration="7.746047466s" podCreationTimestamp="2025-02-13 16:05:50 +0000 UTC" firstStartedPulling="2025-02-13 16:05:52.552239048 +0000 UTC m=+16.277458846" lastFinishedPulling="2025-02-13 16:05:57.02696683 +0000 UTC m=+20.752186640" observedRunningTime="2025-02-13 16:05:57.74550485 +0000 UTC m=+21.470724684" watchObservedRunningTime="2025-02-13 16:05:57.746047466 +0000 UTC m=+21.471267264" Feb 13 16:05:58.508337 kubelet[3418]: E0213 16:05:58.508282 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:06:00.510054 kubelet[3418]: E0213 16:06:00.509343 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:06:02.298061 containerd[2027]: time="2025-02-13T16:06:02.297971860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:02.299612 containerd[2027]: time="2025-02-13T16:06:02.299551768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 16:06:02.300709 containerd[2027]: time="2025-02-13T16:06:02.300641452Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:02.305343 containerd[2027]: time="2025-02-13T16:06:02.305167757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:02.306943 containerd[2027]: time="2025-02-13T16:06:02.306876509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size 
\"91072777\" in 5.278563291s" Feb 13 16:06:02.307334 containerd[2027]: time="2025-02-13T16:06:02.306939617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 16:06:02.313561 containerd[2027]: time="2025-02-13T16:06:02.313223417Z" level=info msg="CreateContainer within sandbox \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 16:06:02.337494 containerd[2027]: time="2025-02-13T16:06:02.337423913Z" level=info msg="CreateContainer within sandbox \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516\"" Feb 13 16:06:02.338432 containerd[2027]: time="2025-02-13T16:06:02.338320277Z" level=info msg="StartContainer for \"f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516\"" Feb 13 16:06:02.400352 systemd[1]: Started cri-containerd-f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516.scope - libcontainer container f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516. Feb 13 16:06:02.456674 containerd[2027]: time="2025-02-13T16:06:02.456505529Z" level=info msg="StartContainer for \"f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516\" returns successfully" Feb 13 16:06:02.508367 kubelet[3418]: E0213 16:06:02.507837 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:06:03.302398 containerd[2027]: time="2025-02-13T16:06:03.302163641Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:06:03.308341 systemd[1]: cri-containerd-f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516.scope: Deactivated successfully. Feb 13 16:06:03.356515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516-rootfs.mount: Deactivated successfully. Feb 13 16:06:03.417838 kubelet[3418]: I0213 16:06:03.417136 3418 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 16:06:03.508981 systemd[1]: Created slice kubepods-burstable-poda30bf926_3951_47c8_a0ab_9fac70d16aca.slice - libcontainer container kubepods-burstable-poda30bf926_3951_47c8_a0ab_9fac70d16aca.slice. Feb 13 16:06:03.540298 systemd[1]: Created slice kubepods-besteffort-poda192015d_967a_446f_82a0_27e3609df636.slice - libcontainer container kubepods-besteffort-poda192015d_967a_446f_82a0_27e3609df636.slice. Feb 13 16:06:03.559505 systemd[1]: Created slice kubepods-besteffort-podb196c8b9_8607_4d55_8a5e_bd31f38b0664.slice - libcontainer container kubepods-besteffort-podb196c8b9_8607_4d55_8a5e_bd31f38b0664.slice. Feb 13 16:06:03.577965 systemd[1]: Created slice kubepods-burstable-pod75b40a5f_a447_41dd_bf81_01b27d71b463.slice - libcontainer container kubepods-burstable-pod75b40a5f_a447_41dd_bf81_01b27d71b463.slice. 
Feb 13 16:06:03.598826 systemd[1]: Created slice kubepods-besteffort-podbdbd4dc3_7177_43ef_82e7_b7dee3c76ac1.slice - libcontainer container kubepods-besteffort-podbdbd4dc3_7177_43ef_82e7_b7dee3c76ac1.slice. Feb 13 16:06:03.622943 kubelet[3418]: I0213 16:06:03.622803 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdbk\" (UniqueName: \"kubernetes.io/projected/a30bf926-3951-47c8-a0ab-9fac70d16aca-kube-api-access-7cdbk\") pod \"coredns-6f6b679f8f-mpmb9\" (UID: \"a30bf926-3951-47c8-a0ab-9fac70d16aca\") " pod="kube-system/coredns-6f6b679f8f-mpmb9" Feb 13 16:06:03.626063 kubelet[3418]: I0213 16:06:03.623129 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b196c8b9-8607-4d55-8a5e-bd31f38b0664-calico-apiserver-certs\") pod \"calico-apiserver-68484dd575-8k4kl\" (UID: \"b196c8b9-8607-4d55-8a5e-bd31f38b0664\") " pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" Feb 13 16:06:03.626063 kubelet[3418]: I0213 16:06:03.623380 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg2fb\" (UniqueName: \"kubernetes.io/projected/75b40a5f-a447-41dd-bf81-01b27d71b463-kube-api-access-mg2fb\") pod \"coredns-6f6b679f8f-jn895\" (UID: \"75b40a5f-a447-41dd-bf81-01b27d71b463\") " pod="kube-system/coredns-6f6b679f8f-jn895" Feb 13 16:06:03.626063 kubelet[3418]: I0213 16:06:03.624844 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7f9\" (UniqueName: \"kubernetes.io/projected/a192015d-967a-446f-82a0-27e3609df636-kube-api-access-5x7f9\") pod \"calico-apiserver-68484dd575-ncn9d\" (UID: \"a192015d-967a-446f-82a0-27e3609df636\") " pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" Feb 13 16:06:03.626063 kubelet[3418]: I0213 16:06:03.624922 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75b40a5f-a447-41dd-bf81-01b27d71b463-config-volume\") pod \"coredns-6f6b679f8f-jn895\" (UID: \"75b40a5f-a447-41dd-bf81-01b27d71b463\") " pod="kube-system/coredns-6f6b679f8f-jn895" Feb 13 16:06:03.626063 kubelet[3418]: I0213 16:06:03.625011 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a192015d-967a-446f-82a0-27e3609df636-calico-apiserver-certs\") pod \"calico-apiserver-68484dd575-ncn9d\" (UID: \"a192015d-967a-446f-82a0-27e3609df636\") " pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" Feb 13 16:06:03.626461 kubelet[3418]: I0213 16:06:03.625094 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a30bf926-3951-47c8-a0ab-9fac70d16aca-config-volume\") pod \"coredns-6f6b679f8f-mpmb9\" (UID: \"a30bf926-3951-47c8-a0ab-9fac70d16aca\") " pod="kube-system/coredns-6f6b679f8f-mpmb9" Feb 13 16:06:03.626461 kubelet[3418]: I0213 16:06:03.625155 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1-tigera-ca-bundle\") pod \"calico-kube-controllers-7fdf6d44f6-cv7cc\" (UID: \"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1\") " 
pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" Feb 13 16:06:03.626461 kubelet[3418]: I0213 16:06:03.625213 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfc7l\" (UniqueName: \"kubernetes.io/projected/b196c8b9-8607-4d55-8a5e-bd31f38b0664-kube-api-access-jfc7l\") pod \"calico-apiserver-68484dd575-8k4kl\" (UID: \"b196c8b9-8607-4d55-8a5e-bd31f38b0664\") " pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" Feb 13 16:06:03.626461 kubelet[3418]: I0213 16:06:03.625264 3418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllzx\" (UniqueName: \"kubernetes.io/projected/bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1-kube-api-access-wllzx\") pod \"calico-kube-controllers-7fdf6d44f6-cv7cc\" (UID: \"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1\") " pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" Feb 13 16:06:03.828350 containerd[2027]: time="2025-02-13T16:06:03.828153380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mpmb9,Uid:a30bf926-3951-47c8-a0ab-9fac70d16aca,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:03.851164 containerd[2027]: time="2025-02-13T16:06:03.851077196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-ncn9d,Uid:a192015d-967a-446f-82a0-27e3609df636,Namespace:calico-apiserver,Attempt:0,}" Feb 13 16:06:03.870488 containerd[2027]: time="2025-02-13T16:06:03.870420692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-8k4kl,Uid:b196c8b9-8607-4d55-8a5e-bd31f38b0664,Namespace:calico-apiserver,Attempt:0,}" Feb 13 16:06:03.889051 containerd[2027]: time="2025-02-13T16:06:03.888887972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn895,Uid:75b40a5f-a447-41dd-bf81-01b27d71b463,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:03.906692 containerd[2027]: time="2025-02-13T16:06:03.906639284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fdf6d44f6-cv7cc,Uid:bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1,Namespace:calico-system,Attempt:0,}" Feb 13 16:06:04.525320 systemd[1]: Created slice kubepods-besteffort-poda33f313d_041d_426a_9914_5ac8d0c42820.slice - libcontainer container kubepods-besteffort-poda33f313d_041d_426a_9914_5ac8d0c42820.slice. 
Feb 13 16:06:04.531543 containerd[2027]: time="2025-02-13T16:06:04.531416228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4668,Uid:a33f313d-041d-426a-9914-5ac8d0c42820,Namespace:calico-system,Attempt:0,}" Feb 13 16:06:04.729576 containerd[2027]: time="2025-02-13T16:06:04.729176289Z" level=info msg="shim disconnected" id=f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516 namespace=k8s.io Feb 13 16:06:04.729576 containerd[2027]: time="2025-02-13T16:06:04.729282381Z" level=warning msg="cleaning up after shim disconnected" id=f63b15ddc312d20b8ddee5a135a2238062bb2a3f423273f61625e9e57c18b516 namespace=k8s.io Feb 13 16:06:04.729576 containerd[2027]: time="2025-02-13T16:06:04.729303777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:04.862252 containerd[2027]: time="2025-02-13T16:06:04.861576213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 16:06:05.210928 containerd[2027]: time="2025-02-13T16:06:05.210684211Z" level=error msg="Failed to destroy network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.211885 containerd[2027]: time="2025-02-13T16:06:05.211713403Z" level=error msg="encountered an error cleaning up failed sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.211885 containerd[2027]: time="2025-02-13T16:06:05.211807435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fdf6d44f6-cv7cc,Uid:bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.213086 kubelet[3418]: E0213 16:06:05.212483 3418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.213086 kubelet[3418]: E0213 16:06:05.212579 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" Feb 13 16:06:05.213086 kubelet[3418]: E0213 16:06:05.212615 3418 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" Feb 13 16:06:05.214744 containerd[2027]: time="2025-02-13T16:06:05.212571283Z" level=error msg="Failed to destroy network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.214826 kubelet[3418]: E0213 16:06:05.212730 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fdf6d44f6-cv7cc_calico-system(bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fdf6d44f6-cv7cc_calico-system(bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" podUID="bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1" Feb 13 16:06:05.217517 containerd[2027]: time="2025-02-13T16:06:05.217433119Z" level=error msg="encountered an error cleaning up failed sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.217895 containerd[2027]: time="2025-02-13T16:06:05.217547983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mpmb9,Uid:a30bf926-3951-47c8-a0ab-9fac70d16aca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.218266 kubelet[3418]: E0213 16:06:05.217840 3418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.219184 kubelet[3418]: E0213 16:06:05.218876 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mpmb9" Feb 13 16:06:05.219184 kubelet[3418]: E0213 16:06:05.218923 3418 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mpmb9" Feb 13 16:06:05.219184 kubelet[3418]: E0213 16:06:05.219006 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-mpmb9_kube-system(a30bf926-3951-47c8-a0ab-9fac70d16aca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-mpmb9_kube-system(a30bf926-3951-47c8-a0ab-9fac70d16aca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mpmb9" podUID="a30bf926-3951-47c8-a0ab-9fac70d16aca" Feb 13 16:06:05.264713 containerd[2027]: time="2025-02-13T16:06:05.264399799Z" level=error msg="Failed to destroy network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.265871 containerd[2027]: time="2025-02-13T16:06:05.265481227Z" level=error msg="Failed to destroy network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.267468 containerd[2027]: time="2025-02-13T16:06:05.267091351Z" level=error msg="encountered an error cleaning up failed sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.267468 containerd[2027]: time="2025-02-13T16:06:05.267198775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn895,Uid:75b40a5f-a447-41dd-bf81-01b27d71b463,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.267468 containerd[2027]: time="2025-02-13T16:06:05.267117163Z" level=error msg="encountered an error cleaning up failed sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.267468 containerd[2027]: time="2025-02-13T16:06:05.267366859Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68484dd575-8k4kl,Uid:b196c8b9-8607-4d55-8a5e-bd31f38b0664,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.270245 kubelet[3418]: E0213 16:06:05.268238 3418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.270245 kubelet[3418]: E0213 16:06:05.268324 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" Feb 13 16:06:05.270245 kubelet[3418]: E0213 16:06:05.268358 3418 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" Feb 13 16:06:05.270563 kubelet[3418]: E0213 16:06:05.268420 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68484dd575-8k4kl_calico-apiserver(b196c8b9-8607-4d55-8a5e-bd31f38b0664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68484dd575-8k4kl_calico-apiserver(b196c8b9-8607-4d55-8a5e-bd31f38b0664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" podUID="b196c8b9-8607-4d55-8a5e-bd31f38b0664" Feb 13 16:06:05.270563 kubelet[3418]: E0213 16:06:05.268952 3418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.270563 kubelet[3418]: E0213 16:06:05.269061 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jn895" Feb 13 16:06:05.270878 kubelet[3418]: E0213 16:06:05.269098 3418 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jn895" Feb 13 16:06:05.270878 kubelet[3418]: E0213 16:06:05.269252 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jn895_kube-system(75b40a5f-a447-41dd-bf81-01b27d71b463)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jn895_kube-system(75b40a5f-a447-41dd-bf81-01b27d71b463)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jn895" podUID="75b40a5f-a447-41dd-bf81-01b27d71b463" Feb 13 16:06:05.289042 containerd[2027]: time="2025-02-13T16:06:05.287985055Z" level=error msg="Failed to destroy network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.289042 containerd[2027]: time="2025-02-13T16:06:05.288918511Z" level=error msg="encountered an error cleaning up failed sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.289277 containerd[2027]: time="2025-02-13T16:06:05.289074391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-ncn9d,Uid:a192015d-967a-446f-82a0-27e3609df636,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.290459 kubelet[3418]: E0213 16:06:05.289571 3418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.290459 kubelet[3418]: E0213 16:06:05.289680 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" Feb 13 16:06:05.290459 kubelet[3418]: E0213 16:06:05.289722 3418 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" Feb 13 16:06:05.290822 kubelet[3418]: E0213 16:06:05.289826 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68484dd575-ncn9d_calico-apiserver(a192015d-967a-446f-82a0-27e3609df636)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68484dd575-ncn9d_calico-apiserver(a192015d-967a-446f-82a0-27e3609df636)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" podUID="a192015d-967a-446f-82a0-27e3609df636" Feb 13 16:06:05.300864 containerd[2027]: time="2025-02-13T16:06:05.300787003Z" level=error msg="Failed to destroy network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.301704 containerd[2027]: time="2025-02-13T16:06:05.301611511Z" level=error msg="encountered an error cleaning up failed sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.301794 containerd[2027]: time="2025-02-13T16:06:05.301761055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4668,Uid:a33f313d-041d-426a-9914-5ac8d0c42820,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.302317 kubelet[3418]: E0213 16:06:05.302145 3418 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:05.302317 kubelet[3418]: E0213 16:06:05.302236 3418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4668" Feb 13 16:06:05.302317 kubelet[3418]: E0213 16:06:05.302269 3418 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4668" Feb 13 16:06:05.303141 kubelet[3418]: E0213 16:06:05.302600 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t4668_calico-system(a33f313d-041d-426a-9914-5ac8d0c42820)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t4668_calico-system(a33f313d-041d-426a-9914-5ac8d0c42820)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:06:05.783663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996-shm.mount: Deactivated successfully. Feb 13 16:06:05.783847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7-shm.mount: Deactivated successfully. Feb 13 16:06:05.784047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc-shm.mount: Deactivated successfully. Feb 13 16:06:05.784190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755-shm.mount: Deactivated successfully. 
Feb 13 16:06:05.860700 kubelet[3418]: I0213 16:06:05.859972 3418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:05.862791 containerd[2027]: time="2025-02-13T16:06:05.861284098Z" level=info msg="StopPodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\"" Feb 13 16:06:05.862791 containerd[2027]: time="2025-02-13T16:06:05.862155058Z" level=info msg="Ensure that sandbox 201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24 in task-service has been cleanup successfully" Feb 13 16:06:05.870154 kubelet[3418]: I0213 16:06:05.869823 3418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:05.873706 containerd[2027]: time="2025-02-13T16:06:05.873603298Z" level=info msg="StopPodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\"" Feb 13 16:06:05.874531 containerd[2027]: time="2025-02-13T16:06:05.874425718Z" level=info msg="Ensure that sandbox 2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755 in task-service has been cleanup successfully" Feb 13 16:06:05.887558 kubelet[3418]: I0213 16:06:05.887512 3418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:05.898007 containerd[2027]: time="2025-02-13T16:06:05.897847150Z" level=info msg="StopPodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\"" Feb 13 16:06:05.898858 containerd[2027]: time="2025-02-13T16:06:05.898710274Z" level=info msg="Ensure that sandbox 248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996 in task-service has been cleanup successfully" Feb 13 16:06:05.901290 kubelet[3418]: I0213 16:06:05.901015 3418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:05.903585 containerd[2027]: time="2025-02-13T16:06:05.902909050Z" level=info msg="StopPodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\"" Feb 13 16:06:05.904087 containerd[2027]: time="2025-02-13T16:06:05.903849154Z" level=info msg="Ensure that sandbox 15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7 in task-service has been cleanup successfully" Feb 13 16:06:05.911940 kubelet[3418]: I0213 16:06:05.911884 3418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:05.916333 containerd[2027]: time="2025-02-13T16:06:05.916251838Z" level=info msg="StopPodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\"" Feb 13 16:06:05.916724 containerd[2027]: time="2025-02-13T16:06:05.916641094Z" level=info msg="Ensure that sandbox 317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc in task-service has been cleanup successfully" Feb 13 16:06:05.922892 kubelet[3418]: I0213 16:06:05.922733 3418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:05.925773 containerd[2027]: time="2025-02-13T16:06:05.925285030Z" level=info msg="StopPodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\"" Feb 13 16:06:05.930127 
containerd[2027]: time="2025-02-13T16:06:05.928820303Z" level=info msg="Ensure that sandbox e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291 in task-service has been cleanup successfully" Feb 13 16:06:06.119263 containerd[2027]: time="2025-02-13T16:06:06.118904743Z" level=error msg="StopPodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" failed" error="failed to destroy network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:06.119672 kubelet[3418]: E0213 16:06:06.119564 3418 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:06.119854 kubelet[3418]: E0213 16:06:06.119685 3418 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755"} Feb 13 16:06:06.119854 kubelet[3418]: E0213 16:06:06.119815 3418 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75b40a5f-a447-41dd-bf81-01b27d71b463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:06.121660 kubelet[3418]: E0213 16:06:06.119858 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75b40a5f-a447-41dd-bf81-01b27d71b463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jn895" podUID="75b40a5f-a447-41dd-bf81-01b27d71b463" Feb 13 16:06:06.129042 containerd[2027]: time="2025-02-13T16:06:06.128508116Z" level=error msg="StopPodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" failed" error="failed to destroy network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:06.129235 kubelet[3418]: E0213 16:06:06.128915 3418 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:06.130561 kubelet[3418]: E0213 16:06:06.128985 3418 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996"} Feb 13 16:06:06.130561 kubelet[3418]: E0213 16:06:06.129508 3418 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a192015d-967a-446f-82a0-27e3609df636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:06.130561 kubelet[3418]: E0213 16:06:06.129569 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a192015d-967a-446f-82a0-27e3609df636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" podUID="a192015d-967a-446f-82a0-27e3609df636" Feb 13 16:06:06.131299 containerd[2027]: time="2025-02-13T16:06:06.131234564Z" level=error msg="StopPodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" failed" error="failed to destroy network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:06.132217 kubelet[3418]: E0213 16:06:06.132098 3418 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:06.132403 kubelet[3418]: E0213 16:06:06.132255 3418 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24"} Feb 13 16:06:06.132403 kubelet[3418]: E0213 16:06:06.132351 3418 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:06.132569 kubelet[3418]: E0213 16:06:06.132422 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" podUID="bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1" Feb 13 16:06:06.136573 containerd[2027]: time="2025-02-13T16:06:06.136386668Z" level=error msg="StopPodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" failed" error="failed to destroy network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:06.136881 kubelet[3418]: E0213 16:06:06.136759 3418 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:06.136881 kubelet[3418]: E0213 16:06:06.136840 3418 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291"} Feb 13 16:06:06.137069 kubelet[3418]: E0213 16:06:06.136895 3418 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a33f313d-041d-426a-9914-5ac8d0c42820\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:06.137069 kubelet[3418]: E0213 16:06:06.136934 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a33f313d-041d-426a-9914-5ac8d0c42820\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t4668" podUID="a33f313d-041d-426a-9914-5ac8d0c42820" Feb 13 16:06:06.146627 containerd[2027]: time="2025-02-13T16:06:06.146377688Z" level=error msg="StopPodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" failed" error="failed to destroy network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:06.147742 kubelet[3418]: E0213 16:06:06.147522 3418 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to destroy network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:06.147742 kubelet[3418]: E0213 16:06:06.147698 3418 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7"} Feb 13 16:06:06.147974 kubelet[3418]: E0213 16:06:06.147790 3418 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b196c8b9-8607-4d55-8a5e-bd31f38b0664\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:06.147974 kubelet[3418]: E0213 16:06:06.147865 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b196c8b9-8607-4d55-8a5e-bd31f38b0664\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" podUID="b196c8b9-8607-4d55-8a5e-bd31f38b0664" Feb 13 16:06:06.156387 containerd[2027]: time="2025-02-13T16:06:06.156267128Z" level=error msg="StopPodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" failed" error="failed to destroy network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:06.156810 kubelet[3418]: E0213 16:06:06.156712 3418 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:06.156810 kubelet[3418]: E0213 16:06:06.156791 3418 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc"} Feb 13 16:06:06.157179 kubelet[3418]: E0213 16:06:06.156855 3418 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a30bf926-3951-47c8-a0ab-9fac70d16aca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:06.157179 kubelet[3418]: E0213 16:06:06.156903 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a30bf926-3951-47c8-a0ab-9fac70d16aca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mpmb9" podUID="a30bf926-3951-47c8-a0ab-9fac70d16aca" Feb 13 16:06:13.827741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002272553.mount: Deactivated successfully. Feb 13 16:06:13.904876 containerd[2027]: time="2025-02-13T16:06:13.904746222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:13.906777 containerd[2027]: time="2025-02-13T16:06:13.906640554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 16:06:13.907882 containerd[2027]: time="2025-02-13T16:06:13.907769658Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:13.912085 containerd[2027]: time="2025-02-13T16:06:13.911925606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:13.913916 containerd[2027]: time="2025-02-13T16:06:13.913611522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 9.050637177s" Feb 13 16:06:13.913916 containerd[2027]: time="2025-02-13T16:06:13.913702590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 16:06:13.954886 containerd[2027]: time="2025-02-13T16:06:13.954783810Z" level=info msg="CreateContainer within sandbox \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 16:06:13.981529 containerd[2027]: time="2025-02-13T16:06:13.981368167Z" level=info msg="CreateContainer within sandbox \"27d09e393258697a9d85080735a3fcd5c96009b19a2d22c57a83e9682ea37935\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261\"" Feb 13 16:06:13.984088 containerd[2027]: time="2025-02-13T16:06:13.982620511Z" level=info msg="StartContainer for \"67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261\"" Feb 13 16:06:14.044487 systemd[1]: Started cri-containerd-67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261.scope - libcontainer container 67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261. 
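[Editor's note] The retry storm above — every StopPodSandbox failing with `stat /var/lib/calico/nodename: no such file or directory` — clears once the calico-node container started here is running: the CNI plugin reads the node name from a file that calico/node writes under /var/lib/calico/. A minimal sketch of that gate, assuming only what the error text states; the function name and error wording are illustrative, not Calico's actual source:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename gates every CNI ADD/DEL: until calico/node has started
// and written this file, the plugin can only fail, which is exactly the
// error kubelet keeps logging above.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Println("CNI delete would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}
```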
Feb 13 16:06:14.119356 containerd[2027]: time="2025-02-13T16:06:14.119197995Z" level=info msg="StartContainer for \"67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261\" returns successfully" Feb 13 16:06:14.266271 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 16:06:14.266458 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 16:06:15.000769 kubelet[3418]: I0213 16:06:15.000645 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fxbcc" podStartSLOduration=3.394598037 podStartE2EDuration="25.000619012s" podCreationTimestamp="2025-02-13 16:05:50 +0000 UTC" firstStartedPulling="2025-02-13 16:05:52.309794479 +0000 UTC m=+16.035014289" lastFinishedPulling="2025-02-13 16:06:13.915815454 +0000 UTC m=+37.641035264" observedRunningTime="2025-02-13 16:06:14.99854822 +0000 UTC m=+38.723768258" watchObservedRunningTime="2025-02-13 16:06:15.000619012 +0000 UTC m=+38.725838858" Feb 13 16:06:16.533106 kernel: bpftool[4711]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 16:06:16.908831 (udev-worker)[4568]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:16.912565 systemd-networkd[1859]: vxlan.calico: Link UP Feb 13 16:06:16.912574 systemd-networkd[1859]: vxlan.calico: Gained carrier Feb 13 16:06:16.992678 (udev-worker)[4567]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:17.508962 containerd[2027]: time="2025-02-13T16:06:17.508729844Z" level=info msg="StopPodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\"" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.662 [INFO][4792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.665 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" iface="eth0" netns="/var/run/netns/cni-056709bb-63d0-179a-13d6-dadf1e47b08f" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.669 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" iface="eth0" netns="/var/run/netns/cni-056709bb-63d0-179a-13d6-dadf1e47b08f" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.673 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" iface="eth0" netns="/var/run/netns/cni-056709bb-63d0-179a-13d6-dadf1e47b08f" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.673 [INFO][4792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.673 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.742 [INFO][4798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.743 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.743 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.767 [WARNING][4798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.767 [INFO][4798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.773 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:17.786660 containerd[2027]: 2025-02-13 16:06:17.781 [INFO][4792] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:17.793621 containerd[2027]: time="2025-02-13T16:06:17.787667541Z" level=info msg="TearDown network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" successfully" Feb 13 16:06:17.793621 containerd[2027]: time="2025-02-13T16:06:17.787756089Z" level=info msg="StopPodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" returns successfully" Feb 13 16:06:17.793621 containerd[2027]: time="2025-02-13T16:06:17.792321873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mpmb9,Uid:a30bf926-3951-47c8-a0ab-9fac70d16aca,Namespace:kube-system,Attempt:1,}" Feb 13 16:06:17.806447 systemd[1]: run-netns-cni\x2d056709bb\x2d63d0\x2d179a\x2d13d6\x2ddadf1e47b08f.mount: Deactivated successfully. 
Feb 13 16:06:18.093848 systemd-networkd[1859]: vxlan.calico: Gained IPv6LL Feb 13 16:06:18.156964 systemd-networkd[1859]: calia030ea3496b: Link UP Feb 13 16:06:18.157653 systemd-networkd[1859]: calia030ea3496b: Gained carrier Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:17.969 [INFO][4805] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0 coredns-6f6b679f8f- kube-system a30bf926-3951-47c8-a0ab-9fac70d16aca 782 0 2025-02-13 16:05:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-154 coredns-6f6b679f8f-mpmb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia030ea3496b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:17.969 [INFO][4805] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.049 [INFO][4818] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" HandleID="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.077 [INFO][4818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" HandleID="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a1880), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-154", "pod":"coredns-6f6b679f8f-mpmb9", "timestamp":"2025-02-13 16:06:18.049342519 +0000 UTC"}, Hostname:"ip-172-31-31-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.077 [INFO][4818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.077 [INFO][4818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
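[Editor's note] Every IPAM operation in this log — the assignment that follows here and the releases earlier — is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock", serializing address changes on the node. A stand-in sketch of that discipline; Calico's real lock is shared across separate plugin invocations on the host, which an in-process sync.Mutex cannot model, so read this only for the acquire/work/release shape:

```go
package main

import (
	"fmt"
	"sync"
)

// hostWideIPAMLock is an in-process stand-in for the host-wide lock.
var hostWideIPAMLock sync.Mutex

func withIPAMLock(fn func() error) error {
	fmt.Println("About to acquire host-wide IPAM lock.")
	hostWideIPAMLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		hostWideIPAMLock.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()
	return fn()
}

func main() {
	_ = withIPAMLock(func() error {
		fmt.Println("Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154'")
		return nil
	})
}
```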
Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.077 [INFO][4818] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154' Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.081 [INFO][4818] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.089 [INFO][4818] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.107 [INFO][4818] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.110 [INFO][4818] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.115 [INFO][4818] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.115 [INFO][4818] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.119 [INFO][4818] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.129 [INFO][4818] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.140 [INFO][4818] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.65/26] block=192.168.94.64/26 handle="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.141 [INFO][4818] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.65/26] handle="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" host="ip-172-31-31-154" Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.141 [INFO][4818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
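[Editor's note] As a reader aid for the block arithmetic above: the host's affine block 192.168.94.64/26 covers 192.168.94.64 through 192.168.94.127 (64 addresses), and the three workload IPs assigned on this node in this section (.65 here, then .66 and .67 below) come out of it sequentially. A quick check:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.94.64/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64: .64 through .127
	for _, s := range []string{"192.168.94.65", "192.168.94.66", "192.168.94.67"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
}
```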
Feb 13 16:06:18.199104 containerd[2027]: 2025-02-13 16:06:18.141 [INFO][4818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.65/26] IPv6=[] ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" HandleID="k8s-pod-network.87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.200566 containerd[2027]: 2025-02-13 16:06:18.145 [INFO][4805] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a30bf926-3951-47c8-a0ab-9fac70d16aca", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"", Pod:"coredns-6f6b679f8f-mpmb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia030ea3496b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:18.200566 containerd[2027]: 2025-02-13 16:06:18.146 [INFO][4805] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.65/32] ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.200566 containerd[2027]: 2025-02-13 16:06:18.146 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia030ea3496b ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.200566 containerd[2027]: 2025-02-13 16:06:18.159 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" 
WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.200566 containerd[2027]: 2025-02-13 16:06:18.161 [INFO][4805] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a30bf926-3951-47c8-a0ab-9fac70d16aca", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f", Pod:"coredns-6f6b679f8f-mpmb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia030ea3496b", MAC:"96:d9:40:a1:07:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:18.200566 containerd[2027]: 2025-02-13 16:06:18.194 [INFO][4805] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f" Namespace="kube-system" Pod="coredns-6f6b679f8f-mpmb9" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:18.288118 containerd[2027]: time="2025-02-13T16:06:18.286057916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:18.288118 containerd[2027]: time="2025-02-13T16:06:18.286182080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:18.288118 containerd[2027]: time="2025-02-13T16:06:18.286210688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:18.288118 containerd[2027]: time="2025-02-13T16:06:18.286440884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:18.370251 systemd[1]: Started cri-containerd-87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f.scope - libcontainer container 87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f. Feb 13 16:06:18.468766 containerd[2027]: time="2025-02-13T16:06:18.468595665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mpmb9,Uid:a30bf926-3951-47c8-a0ab-9fac70d16aca,Namespace:kube-system,Attempt:1,} returns sandbox id \"87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f\"" Feb 13 16:06:18.481013 containerd[2027]: time="2025-02-13T16:06:18.480493257Z" level=info msg="CreateContainer within sandbox \"87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:18.513629 containerd[2027]: time="2025-02-13T16:06:18.513540321Z" level=info msg="StopPodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\"" Feb 13 16:06:18.519924 containerd[2027]: time="2025-02-13T16:06:18.519388785Z" level=info msg="StopPodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\"" Feb 13 16:06:18.548675 containerd[2027]: time="2025-02-13T16:06:18.548463345Z" level=info msg="CreateContainer within sandbox \"87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9d1330980843fdd6edcdd2dffd318ec462032f06f8768019f0bd593c1e90467\"" Feb 13 16:06:18.557849 containerd[2027]: time="2025-02-13T16:06:18.555570057Z" level=info msg="StartContainer for \"a9d1330980843fdd6edcdd2dffd318ec462032f06f8768019f0bd593c1e90467\"" Feb 13 16:06:18.735202 systemd[1]: Started sshd@9-172.31.31.154:22-139.178.68.195:52724.service - OpenSSH per-connection server daemon (139.178.68.195:52724). Feb 13 16:06:18.890440 systemd[1]: Started cri-containerd-a9d1330980843fdd6edcdd2dffd318ec462032f06f8768019f0bd593c1e90467.scope - libcontainer container a9d1330980843fdd6edcdd2dffd318ec462032f06f8768019f0bd593c1e90467. Feb 13 16:06:18.964063 sshd[4923]: Accepted publickey for core from 139.178.68.195 port 52724 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:18.970778 sshd[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:18.990215 systemd-logind[2001]: New session 10 of user core. Feb 13 16:06:19.028547 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:06:19.128097 containerd[2027]: time="2025-02-13T16:06:19.127566332Z" level=info msg="StartContainer for \"a9d1330980843fdd6edcdd2dffd318ec462032f06f8768019f0bd593c1e90467\" returns successfully" Feb 13 16:06:19.373945 systemd-networkd[1859]: calia030ea3496b: Gained IPv6LL Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.170 [INFO][4899] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.171 [INFO][4899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" iface="eth0" netns="/var/run/netns/cni-9b6d1e3c-0861-da5a-9dd1-07f10185553d" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.172 [INFO][4899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" iface="eth0" netns="/var/run/netns/cni-9b6d1e3c-0861-da5a-9dd1-07f10185553d" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.173 [INFO][4899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" iface="eth0" netns="/var/run/netns/cni-9b6d1e3c-0861-da5a-9dd1-07f10185553d" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.173 [INFO][4899] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.173 [INFO][4899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.303 [INFO][4962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.306 [INFO][4962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.306 [INFO][4962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.386 [WARNING][4962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.389 [INFO][4962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.441 [INFO][4962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:19.456852 containerd[2027]: 2025-02-13 16:06:19.448 [INFO][4899] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:19.461441 containerd[2027]: time="2025-02-13T16:06:19.461133814Z" level=info msg="TearDown network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" successfully" Feb 13 16:06:19.461441 containerd[2027]: time="2025-02-13T16:06:19.461201938Z" level=info msg="StopPodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" returns successfully" Feb 13 16:06:19.465337 containerd[2027]: time="2025-02-13T16:06:19.464679826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fdf6d44f6-cv7cc,Uid:bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1,Namespace:calico-system,Attempt:1,}" Feb 13 16:06:19.469549 systemd[1]: run-netns-cni\x2d9b6d1e3c\x2d0861\x2dda5a\x2d9dd1\x2d07f10185553d.mount: Deactivated successfully. 
Feb 13 16:06:19.513794 containerd[2027]: time="2025-02-13T16:06:19.513534886Z" level=info msg="StopPodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\"" Feb 13 16:06:19.521063 containerd[2027]: time="2025-02-13T16:06:19.517306846Z" level=info msg="StopPodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\"" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.215 [INFO][4910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.215 [INFO][4910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" iface="eth0" netns="/var/run/netns/cni-7f320105-4101-1fcf-3498-57c62a20f180" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.216 [INFO][4910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" iface="eth0" netns="/var/run/netns/cni-7f320105-4101-1fcf-3498-57c62a20f180" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.219 [INFO][4910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" iface="eth0" netns="/var/run/netns/cni-7f320105-4101-1fcf-3498-57c62a20f180" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.219 [INFO][4910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.219 [INFO][4910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.425 [INFO][4968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.427 [INFO][4968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.441 [INFO][4968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.575 [WARNING][4968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.575 [INFO][4968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.627 [INFO][4968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:19.674199 containerd[2027]: 2025-02-13 16:06:19.652 [INFO][4910] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:19.680117 containerd[2027]: time="2025-02-13T16:06:19.679542107Z" level=info msg="TearDown network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" successfully" Feb 13 16:06:19.680117 containerd[2027]: time="2025-02-13T16:06:19.679639679Z" level=info msg="StopPodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" returns successfully" Feb 13 16:06:19.683040 containerd[2027]: time="2025-02-13T16:06:19.681675371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn895,Uid:75b40a5f-a447-41dd-bf81-01b27d71b463,Namespace:kube-system,Attempt:1,}" Feb 13 16:06:19.739942 sshd[4923]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:19.756373 systemd[1]: sshd@9-172.31.31.154:22-139.178.68.195:52724.service: Deactivated successfully. Feb 13 16:06:19.766614 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:06:19.783361 systemd-logind[2001]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:06:19.812959 systemd[1]: run-netns-cni\x2d7f320105\x2d4101\x2d1fcf\x2d3498\x2d57c62a20f180.mount: Deactivated successfully. Feb 13 16:06:19.825061 systemd-logind[2001]: Removed session 10. Feb 13 16:06:20.229905 kubelet[3418]: I0213 16:06:20.229729 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mpmb9" podStartSLOduration=38.22969423 podStartE2EDuration="38.22969423s" podCreationTimestamp="2025-02-13 16:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:20.128655837 +0000 UTC m=+43.853875683" watchObservedRunningTime="2025-02-13 16:06:20.22969423 +0000 UTC m=+43.954914040" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:19.973 [INFO][5020] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:19.976 [INFO][5020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" iface="eth0" netns="/var/run/netns/cni-7b073cec-5ac4-56bb-ed29-34927cee46ae" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:19.978 [INFO][5020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" iface="eth0" netns="/var/run/netns/cni-7b073cec-5ac4-56bb-ed29-34927cee46ae" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:19.979 [INFO][5020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" iface="eth0" netns="/var/run/netns/cni-7b073cec-5ac4-56bb-ed29-34927cee46ae" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:19.982 [INFO][5020] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:19.985 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.094 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.096 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.096 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.197 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.199 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.219 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:20.264172 containerd[2027]: 2025-02-13 16:06:20.232 [INFO][5020] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:20.280091 containerd[2027]: time="2025-02-13T16:06:20.279976558Z" level=info msg="TearDown network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" successfully" Feb 13 16:06:20.282031 containerd[2027]: time="2025-02-13T16:06:20.281144038Z" level=info msg="StopPodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" returns successfully" Feb 13 16:06:20.289338 systemd[1]: run-netns-cni\x2d7b073cec\x2d5ac4\x2d56bb\x2ded29\x2d34927cee46ae.mount: Deactivated successfully. Feb 13 16:06:20.297205 containerd[2027]: time="2025-02-13T16:06:20.292466146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-8k4kl,Uid:b196c8b9-8607-4d55-8a5e-bd31f38b0664,Namespace:calico-apiserver,Attempt:1,}" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.067 [INFO][5019] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.074 [INFO][5019] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" iface="eth0" netns="/var/run/netns/cni-c75a04b6-fd7b-d40e-ad2c-fe4775099de1" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.075 [INFO][5019] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" iface="eth0" netns="/var/run/netns/cni-c75a04b6-fd7b-d40e-ad2c-fe4775099de1" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.077 [INFO][5019] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" iface="eth0" netns="/var/run/netns/cni-c75a04b6-fd7b-d40e-ad2c-fe4775099de1" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.077 [INFO][5019] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.079 [INFO][5019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.324 [INFO][5061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.325 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.325 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.427 [WARNING][5061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.427 [INFO][5061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.445 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:20.474202 containerd[2027]: 2025-02-13 16:06:20.456 [INFO][5019] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:20.487375 systemd[1]: run-netns-cni\x2dc75a04b6\x2dfd7b\x2dd40e\x2dad2c\x2dfe4775099de1.mount: Deactivated successfully. 
Feb 13 16:06:20.494485 containerd[2027]: time="2025-02-13T16:06:20.483957767Z" level=info msg="TearDown network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" successfully" Feb 13 16:06:20.494485 containerd[2027]: time="2025-02-13T16:06:20.491200175Z" level=info msg="StopPodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" returns successfully" Feb 13 16:06:20.494806 containerd[2027]: time="2025-02-13T16:06:20.494596031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-ncn9d,Uid:a192015d-967a-446f-82a0-27e3609df636,Namespace:calico-apiserver,Attempt:1,}" Feb 13 16:06:20.675059 kubelet[3418]: I0213 16:06:20.673810 3418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:20.725297 systemd-networkd[1859]: cali7726a139177: Link UP Feb 13 16:06:20.730494 systemd-networkd[1859]: cali7726a139177: Gained carrier Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:19.930 [INFO][4979] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0 calico-kube-controllers-7fdf6d44f6- calico-system bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1 817 0 2025-02-13 16:05:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fdf6d44f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-154 calico-kube-controllers-7fdf6d44f6-cv7cc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7726a139177 [] []}} ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:19.931 [INFO][4979] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.199 [INFO][5054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" HandleID="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.470 [INFO][5054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" HandleID="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000d75b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-154", "pod":"calico-kube-controllers-7fdf6d44f6-cv7cc", "timestamp":"2025-02-13 16:06:20.197924625 +0000 UTC"}, Hostname:"ip-172-31-31-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.470 [INFO][5054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.470 [INFO][5054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.470 [INFO][5054] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154' Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.483 [INFO][5054] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.614 [INFO][5054] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.631 [INFO][5054] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.637 [INFO][5054] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.643 [INFO][5054] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.643 [INFO][5054] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.646 [INFO][5054] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.656 [INFO][5054] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.675 [INFO][5054] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.66/26] block=192.168.94.64/26 handle="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.675 [INFO][5054] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.66/26] handle="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" host="ip-172-31-31-154" Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.676 [INFO][5054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:20.813515 containerd[2027]: 2025-02-13 16:06:20.676 [INFO][5054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.66/26] IPv6=[] ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" HandleID="k8s-pod-network.de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.824233 containerd[2027]: 2025-02-13 16:06:20.688 [INFO][4979] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0", GenerateName:"calico-kube-controllers-7fdf6d44f6-", Namespace:"calico-system", SelfLink:"", UID:"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fdf6d44f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"", Pod:"calico-kube-controllers-7fdf6d44f6-cv7cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7726a139177", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:20.824233 containerd[2027]: 2025-02-13 16:06:20.688 [INFO][4979] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.66/32] ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.824233 containerd[2027]: 2025-02-13 16:06:20.688 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7726a139177 ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.824233 containerd[2027]: 2025-02-13 16:06:20.734 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.824233 containerd[2027]: 2025-02-13 16:06:20.736 [INFO][4979] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0", GenerateName:"calico-kube-controllers-7fdf6d44f6-", Namespace:"calico-system", SelfLink:"", UID:"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fdf6d44f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b", Pod:"calico-kube-controllers-7fdf6d44f6-cv7cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7726a139177", MAC:"02:da:ad:14:68:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:20.824233 containerd[2027]: 2025-02-13 16:06:20.787 [INFO][4979] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b" Namespace="calico-system" Pod="calico-kube-controllers-7fdf6d44f6-cv7cc" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:20.868587 systemd[1]: run-containerd-runc-k8s.io-67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261-runc.xvF2oL.mount: Deactivated successfully. Feb 13 16:06:20.996238 containerd[2027]: time="2025-02-13T16:06:20.994185829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:20.996238 containerd[2027]: time="2025-02-13T16:06:20.994321573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:20.996238 containerd[2027]: time="2025-02-13T16:06:20.994385629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:20.999896 containerd[2027]: time="2025-02-13T16:06:20.996982777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.022471 systemd-networkd[1859]: calice3781bcc59: Link UP Feb 13 16:06:21.024938 systemd-networkd[1859]: calice3781bcc59: Gained carrier Feb 13 16:06:21.094405 systemd[1]: Started cri-containerd-de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b.scope - libcontainer container de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b. Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.082 [INFO][5032] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0 coredns-6f6b679f8f- kube-system 75b40a5f-a447-41dd-bf81-01b27d71b463 819 0 2025-02-13 16:05:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-154 coredns-6f6b679f8f-jn895 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice3781bcc59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.085 [INFO][5032] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.518 [INFO][5068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" HandleID="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.685 [INFO][5068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" HandleID="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039ead0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-154", "pod":"coredns-6f6b679f8f-jn895", "timestamp":"2025-02-13 16:06:20.518901911 +0000 UTC"}, Hostname:"ip-172-31-31-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.685 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.685 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.685 [INFO][5068] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154' Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.707 [INFO][5068] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.762 [INFO][5068] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.830 [INFO][5068] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.854 [INFO][5068] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.883 [INFO][5068] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.887 [INFO][5068] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.898 [INFO][5068] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33 Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.935 [INFO][5068] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.970 [INFO][5068] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.67/26] block=192.168.94.64/26 handle="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.972 [INFO][5068] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.67/26] handle="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" host="ip-172-31-31-154" Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.972 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
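The IPAM sequence above (look up host affinities, confirm the 192.168.94.64/26 block, then write the block back to claim an IP) is Calico's block-affinity allocator: each node holds an affine /26 and hands out ordinals from it, which is why every pod in this log lands in 192.168.94.64/26. A minimal Go sketch of the claim step, assuming a simplified in-memory block; the Block type and bitmap layout here are illustrative stand-ins, not Calico's datastore model:

package main

import (
	"fmt"
	"net"
)

// Block is an illustrative stand-in for a Calico IPAM affinity block:
// a /26 CIDR plus a set of allocated ordinals. Calico's real block is
// a datastore object written back under the host-wide IPAM lock seen
// in the log above ("Writing block in order to claim IPs").
type Block struct {
	CIDR      *net.IPNet
	Allocated map[int]bool // ordinal -> in use
}

// Claim returns the next free address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step.
func (b *Block) Claim() (net.IP, error) {
	ones, bits := b.CIDR.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses for a /26
	base := b.CIDR.IP.To4()
	for ord := 0; ord < size; ord++ {
		if b.Allocated[ord] {
			continue
		}
		b.Allocated[ord] = true
		ip := make(net.IP, 4)
		copy(ip, base)
		ip[3] += byte(ord) // safe within a single /26
		return ip, nil
	}
	return nil, fmt.Errorf("block %s is full", b.CIDR)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.94.64/26")
	// Ordinals 0-2 (.64-.66) already taken, as in the log, so the
	// next claim yields 192.168.94.67, the coredns assignment above.
	b := &Block{CIDR: cidr, Allocated: map[int]bool{0: true, 1: true, 2: true}}
	ip, _ := b.Claim()
	fmt.Println(ip) // 192.168.94.67
}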
Feb 13 16:06:21.158352 containerd[2027]: 2025-02-13 16:06:20.972 [INFO][5068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.67/26] IPv6=[] ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" HandleID="k8s-pod-network.16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.160274 containerd[2027]: 2025-02-13 16:06:20.984 [INFO][5032] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75b40a5f-a447-41dd-bf81-01b27d71b463", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"", Pod:"coredns-6f6b679f8f-jn895", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice3781bcc59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:21.160274 containerd[2027]: 2025-02-13 16:06:20.984 [INFO][5032] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.67/32] ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.160274 containerd[2027]: 2025-02-13 16:06:20.984 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice3781bcc59 ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.160274 containerd[2027]: 2025-02-13 16:06:21.026 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" 
WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.160274 containerd[2027]: 2025-02-13 16:06:21.079 [INFO][5032] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75b40a5f-a447-41dd-bf81-01b27d71b463", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33", Pod:"coredns-6f6b679f8f-jn895", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice3781bcc59", MAC:"ea:c7:e8:82:18:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:21.160274 containerd[2027]: 2025-02-13 16:06:21.152 [INFO][5032] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn895" WorkloadEndpoint="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:21.302626 systemd-networkd[1859]: cali77aa2e49be4: Link UP Feb 13 16:06:21.306444 systemd-networkd[1859]: cali77aa2e49be4: Gained carrier Feb 13 16:06:21.342906 containerd[2027]: time="2025-02-13T16:06:21.339969215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:21.342906 containerd[2027]: time="2025-02-13T16:06:21.340130663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:21.342906 containerd[2027]: time="2025-02-13T16:06:21.340171499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.342906 containerd[2027]: time="2025-02-13T16:06:21.340361603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:20.755 [INFO][5092] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0 calico-apiserver-68484dd575- calico-apiserver a192015d-967a-446f-82a0-27e3609df636 829 0 2025-02-13 16:05:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68484dd575 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-154 calico-apiserver-68484dd575-ncn9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77aa2e49be4 [] []}} ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:20.755 [INFO][5092] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:20.951 [INFO][5128] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" HandleID="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.012 [INFO][5128] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" HandleID="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000101200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-154", "pod":"calico-apiserver-68484dd575-ncn9d", "timestamp":"2025-02-13 16:06:20.951166669 +0000 UTC"}, Hostname:"ip-172-31-31-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.012 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.013 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.013 [INFO][5128] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154' Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.079 [INFO][5128] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.140 [INFO][5128] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.176 [INFO][5128] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.187 [INFO][5128] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.204 [INFO][5128] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.205 [INFO][5128] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.208 [INFO][5128] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3 Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.238 [INFO][5128] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.262 [INFO][5128] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.68/26] block=192.168.94.64/26 handle="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.263 [INFO][5128] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.68/26] handle="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" host="ip-172-31-31-154" Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.263 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:21.382818 containerd[2027]: 2025-02-13 16:06:21.264 [INFO][5128] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.68/26] IPv6=[] ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" HandleID="k8s-pod-network.87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.386302 containerd[2027]: 2025-02-13 16:06:21.275 [INFO][5092] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"a192015d-967a-446f-82a0-27e3609df636", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"", Pod:"calico-apiserver-68484dd575-ncn9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77aa2e49be4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:21.386302 containerd[2027]: 2025-02-13 16:06:21.275 [INFO][5092] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.68/32] ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.386302 containerd[2027]: 2025-02-13 16:06:21.275 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77aa2e49be4 ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.386302 containerd[2027]: 2025-02-13 16:06:21.312 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.386302 containerd[2027]: 2025-02-13 16:06:21.313 [INFO][5092] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"a192015d-967a-446f-82a0-27e3609df636", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3", Pod:"calico-apiserver-68484dd575-ncn9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77aa2e49be4", MAC:"7a:be:42:af:b9:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:21.386302 containerd[2027]: 2025-02-13 16:06:21.368 [INFO][5092] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-ncn9d" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:21.454402 systemd[1]: Started cri-containerd-16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33.scope - libcontainer container 16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33. 
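A reading note for the WorkloadEndpoint dumps above: Go prints the port numbers as hex literals, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the CoreDNS metrics port), and Protocol{Type:1, NumVal:0x0, StrVal:"UDP"} is the string arm of Calico's numorstring union. A one-line decode:

package main

import "fmt"

func main() {
	// The v3.WorkloadEndpointPort dumps print ports as Go hex
	// literals; these are the familiar CoreDNS ports in decimal.
	fmt.Println(0x35)   // 53   (dns, dns-tcp)
	fmt.Println(0x23c1) // 9153 (metrics)
}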
Feb 13 16:06:21.473956 systemd-networkd[1859]: calic907a6034df: Link UP Feb 13 16:06:21.478759 systemd-networkd[1859]: calic907a6034df: Gained carrier Feb 13 16:06:21.524292 containerd[2027]: time="2025-02-13T16:06:21.518700048Z" level=info msg="StopPodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\"" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:20.680 [INFO][5081] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0 calico-apiserver-68484dd575- calico-apiserver b196c8b9-8607-4d55-8a5e-bd31f38b0664 828 0 2025-02-13 16:05:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68484dd575 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-154 calico-apiserver-68484dd575-8k4kl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic907a6034df [] []}} ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:20.680 [INFO][5081] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.138 [INFO][5113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" HandleID="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.192 [INFO][5113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" HandleID="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-154", "pod":"calico-apiserver-68484dd575-8k4kl", "timestamp":"2025-02-13 16:06:21.138495682 +0000 UTC"}, Hostname:"ip-172-31-31-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.196 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.263 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.264 [INFO][5113] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154' Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.276 [INFO][5113] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.312 [INFO][5113] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.339 [INFO][5113] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.348 [INFO][5113] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.373 [INFO][5113] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.374 [INFO][5113] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.381 [INFO][5113] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.412 [INFO][5113] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.442 [INFO][5113] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.69/26] block=192.168.94.64/26 handle="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.442 [INFO][5113] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.69/26] handle="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" host="ip-172-31-31-154" Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.442 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
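The interleaved timestamps above show the host-wide IPAM lock doing its job: [5113] logs "About to acquire host-wide IPAM lock" at 16:06:21.196 but only acquires it at 16:06:21.263, the moment [5128] releases it, so the concurrent CNI ADDs are serialized and claim .67, .68, .69 in turn. A minimal Go sketch of that shape; the handles and addresses are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex // stands in for the "host-wide IPAM lock"
		next = 67       // next free host byte in 192.168.94.64/26
		wg   sync.WaitGroup
	)
	// Three concurrent CNI ADDs, like [5068], [5128], [5113] above:
	// each funnels through the same mutex, so assignments come out
	// serialized regardless of goroutine scheduling.
	for _, handle := range []string{"[5068]", "[5128]", "[5113]"} {
		wg.Add(1)
		go func(h string) {
			defer wg.Done()
			fmt.Println(h, "about to acquire host-wide IPAM lock")
			mu.Lock()
			fmt.Println(h, "acquired host-wide IPAM lock")
			ip := fmt.Sprintf("192.168.94.%d/26", next)
			next++
			mu.Unlock()
			fmt.Println(h, "released host-wide IPAM lock; assigned", ip)
		}(handle)
	}
	wg.Wait()
}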
Feb 13 16:06:21.574886 containerd[2027]: 2025-02-13 16:06:21.442 [INFO][5113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.69/26] IPv6=[] ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" HandleID="k8s-pod-network.f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.579236 containerd[2027]: 2025-02-13 16:06:21.452 [INFO][5081] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"b196c8b9-8607-4d55-8a5e-bd31f38b0664", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"", Pod:"calico-apiserver-68484dd575-8k4kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic907a6034df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:21.579236 containerd[2027]: 2025-02-13 16:06:21.452 [INFO][5081] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.69/32] ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.579236 containerd[2027]: 2025-02-13 16:06:21.452 [INFO][5081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic907a6034df ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.579236 containerd[2027]: 2025-02-13 16:06:21.476 [INFO][5081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.579236 containerd[2027]: 2025-02-13 16:06:21.486 [INFO][5081] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"b196c8b9-8607-4d55-8a5e-bd31f38b0664", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea", Pod:"calico-apiserver-68484dd575-8k4kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic907a6034df", MAC:"e2:5c:3f:eb:05:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:21.579236 containerd[2027]: 2025-02-13 16:06:21.548 [INFO][5081] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea" Namespace="calico-apiserver" Pod="calico-apiserver-68484dd575-8k4kl" WorkloadEndpoint="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:21.593341 containerd[2027]: time="2025-02-13T16:06:21.592180908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:21.593341 containerd[2027]: time="2025-02-13T16:06:21.592328196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:21.593924 containerd[2027]: time="2025-02-13T16:06:21.593646504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.597212 containerd[2027]: time="2025-02-13T16:06:21.596851644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.680984 containerd[2027]: time="2025-02-13T16:06:21.680376505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:21.688613 containerd[2027]: time="2025-02-13T16:06:21.683334745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:21.699675 containerd[2027]: time="2025-02-13T16:06:21.691981993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.699675 containerd[2027]: time="2025-02-13T16:06:21.692393797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:21.703802 systemd[1]: Started cri-containerd-87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3.scope - libcontainer container 87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3. Feb 13 16:06:21.860426 systemd[1]: Started cri-containerd-f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea.scope - libcontainer container f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea. Feb 13 16:06:21.875643 containerd[2027]: time="2025-02-13T16:06:21.875385350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn895,Uid:75b40a5f-a447-41dd-bf81-01b27d71b463,Namespace:kube-system,Attempt:1,} returns sandbox id \"16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33\"" Feb 13 16:06:21.918016 containerd[2027]: time="2025-02-13T16:06:21.917897354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fdf6d44f6-cv7cc,Uid:bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1,Namespace:calico-system,Attempt:1,} returns sandbox id \"de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b\"" Feb 13 16:06:21.930462 containerd[2027]: time="2025-02-13T16:06:21.930143642Z" level=info msg="CreateContainer within sandbox \"16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:21.936922 containerd[2027]: time="2025-02-13T16:06:21.936406742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 16:06:22.032439 containerd[2027]: time="2025-02-13T16:06:22.032343827Z" level=info msg="CreateContainer within sandbox \"16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d02fbab20f7fffb5d7a818b6af61122a5c932b4978f4ecacf432836bd4978143\"" Feb 13 16:06:22.038004 containerd[2027]: time="2025-02-13T16:06:22.037849775Z" level=info msg="StartContainer for \"d02fbab20f7fffb5d7a818b6af61122a5c932b4978f4ecacf432836bd4978143\"" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:21.968 [INFO][5270] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:21.972 [INFO][5270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" iface="eth0" netns="/var/run/netns/cni-b820598f-3233-bfe9-cfc1-d7f3b808629f" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:21.972 [INFO][5270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" iface="eth0" netns="/var/run/netns/cni-b820598f-3233-bfe9-cfc1-d7f3b808629f" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:21.973 [INFO][5270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" iface="eth0" netns="/var/run/netns/cni-b820598f-3233-bfe9-cfc1-d7f3b808629f" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:21.973 [INFO][5270] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:21.973 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.116 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.118 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.118 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.180 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.181 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.195 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:22.210965 containerd[2027]: 2025-02-13 16:06:22.205 [INFO][5270] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:22.215391 containerd[2027]: time="2025-02-13T16:06:22.214562315Z" level=info msg="TearDown network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" successfully" Feb 13 16:06:22.215391 containerd[2027]: time="2025-02-13T16:06:22.214611515Z" level=info msg="StopPodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" returns successfully" Feb 13 16:06:22.216909 containerd[2027]: time="2025-02-13T16:06:22.216135335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4668,Uid:a33f313d-041d-426a-9914-5ac8d0c42820,Namespace:calico-system,Attempt:1,}" Feb 13 16:06:22.217744 systemd[1]: Started cri-containerd-d02fbab20f7fffb5d7a818b6af61122a5c932b4978f4ecacf432836bd4978143.scope - libcontainer container d02fbab20f7fffb5d7a818b6af61122a5c932b4978f4ecacf432836bd4978143. 
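Note how tolerant the e77de696... teardown above is: the veth is already gone ("Workload's veth was already gone. Nothing to do."), and the IPAM release finds no address under the handle, logs a WARNING, falls back to releasing by workload ID, and still reports success. That keeps the CNI DEL path idempotent, so repeated or late deletes converge instead of wedging sandbox cleanup. A sketch of that idempotent-release shape, with hypothetical names and a shortened handle:

package main

import (
	"fmt"
	"log"
)

// releaseByHandle mimics the tolerant release path above: releasing an
// address that was never assigned (or was already released) logs a
// warning and returns cleanly, so retried DELs always succeed.
func releaseByHandle(allocs map[string]string, handleID string) {
	ip, ok := allocs[handleID]
	if !ok {
		log.Printf("WARNING asked to release address but it doesn't exist. Ignoring handle=%s", handleID)
		return
	}
	delete(allocs, handleID)
	fmt.Println("released", ip, "for", handleID)
}

func main() {
	allocs := map[string]string{"k8s-pod-network.e77de696": "192.168.94.65"}
	releaseByHandle(allocs, "k8s-pod-network.e77de696") // releases the IP
	releaseByHandle(allocs, "k8s-pod-network.e77de696") // harmless no-op
}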
Feb 13 16:06:22.348754 containerd[2027]: time="2025-02-13T16:06:22.348625860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-ncn9d,Uid:a192015d-967a-446f-82a0-27e3609df636,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3\"" Feb 13 16:06:22.373114 containerd[2027]: time="2025-02-13T16:06:22.372241656Z" level=info msg="StartContainer for \"d02fbab20f7fffb5d7a818b6af61122a5c932b4978f4ecacf432836bd4978143\" returns successfully" Feb 13 16:06:22.446212 systemd-networkd[1859]: cali77aa2e49be4: Gained IPv6LL Feb 13 16:06:22.573542 systemd-networkd[1859]: cali7726a139177: Gained IPv6LL Feb 13 16:06:22.637307 systemd-networkd[1859]: calice3781bcc59: Gained IPv6LL Feb 13 16:06:22.678688 containerd[2027]: time="2025-02-13T16:06:22.678612578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68484dd575-8k4kl,Uid:b196c8b9-8607-4d55-8a5e-bd31f38b0664,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea\"" Feb 13 16:06:22.738384 systemd-networkd[1859]: cali6c96ba65918: Link UP Feb 13 16:06:22.742396 systemd-networkd[1859]: cali6c96ba65918: Gained carrier Feb 13 16:06:22.766321 systemd-networkd[1859]: calic907a6034df: Gained IPv6LL Feb 13 16:06:22.814292 systemd[1]: run-netns-cni\x2db820598f\x2d3233\x2dbfe9\x2dcfc1\x2dd7f3b808629f.mount: Deactivated successfully. Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.426 [INFO][5388] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0 csi-node-driver- calico-system a33f313d-041d-426a-9914-5ac8d0c42820 863 0 2025-02-13 16:05:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-154 csi-node-driver-t4668 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6c96ba65918 [] []}} ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.427 [INFO][5388] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.553 [INFO][5418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" HandleID="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.576 [INFO][5418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" HandleID="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" 
Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3f30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-154", "pod":"csi-node-driver-t4668", "timestamp":"2025-02-13 16:06:22.553017613 +0000 UTC"}, Hostname:"ip-172-31-31-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.576 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.576 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.576 [INFO][5418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-154' Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.584 [INFO][5418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.600 [INFO][5418] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.634 [INFO][5418] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.650 [INFO][5418] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.658 [INFO][5418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.658 [INFO][5418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.669 [INFO][5418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.694 [INFO][5418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.716 [INFO][5418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.70/26] block=192.168.94.64/26 handle="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.716 [INFO][5418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.70/26] handle="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" host="ip-172-31-31-154" Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.716 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:22.825301 containerd[2027]: 2025-02-13 16:06:22.716 [INFO][5418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.70/26] IPv6=[] ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" HandleID="k8s-pod-network.c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.829773 containerd[2027]: 2025-02-13 16:06:22.723 [INFO][5388] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a33f313d-041d-426a-9914-5ac8d0c42820", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"", Pod:"csi-node-driver-t4668", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c96ba65918", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:22.829773 containerd[2027]: 2025-02-13 16:06:22.724 [INFO][5388] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.70/32] ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.829773 containerd[2027]: 2025-02-13 16:06:22.724 [INFO][5388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c96ba65918 ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.829773 containerd[2027]: 2025-02-13 16:06:22.744 [INFO][5388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.829773 containerd[2027]: 2025-02-13 16:06:22.746 [INFO][5388] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" 
Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a33f313d-041d-426a-9914-5ac8d0c42820", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad", Pod:"csi-node-driver-t4668", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c96ba65918", MAC:"92:58:b7:38:2f:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:22.829773 containerd[2027]: 2025-02-13 16:06:22.788 [INFO][5388] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad" Namespace="calico-system" Pod="csi-node-driver-t4668" WorkloadEndpoint="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:22.923850 systemd[1]: run-containerd-runc-k8s.io-67efb7d7ef3b180091b3ab8afb12a3f47cf9bb42c4f1f24a3390445251364261-runc.uKJHi4.mount: Deactivated successfully. Feb 13 16:06:23.003736 containerd[2027]: time="2025-02-13T16:06:23.001625003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:23.003736 containerd[2027]: time="2025-02-13T16:06:23.001754675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:23.003736 containerd[2027]: time="2025-02-13T16:06:23.001783223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:23.007587 containerd[2027]: time="2025-02-13T16:06:23.004747091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:23.115496 systemd[1]: Started cri-containerd-c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad.scope - libcontainer container c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad. 
Feb 13 16:06:23.223730 kubelet[3418]: I0213 16:06:23.223594 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jn895" podStartSLOduration=41.223570188 podStartE2EDuration="41.223570188s" podCreationTimestamp="2025-02-13 16:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:23.172880004 +0000 UTC m=+46.898099814" watchObservedRunningTime="2025-02-13 16:06:23.223570188 +0000 UTC m=+46.948789986" Feb 13 16:06:23.366268 containerd[2027]: time="2025-02-13T16:06:23.366190345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4668,Uid:a33f313d-041d-426a-9914-5ac8d0c42820,Namespace:calico-system,Attempt:1,} returns sandbox id \"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad\"" Feb 13 16:06:23.800738 systemd[1]: run-containerd-runc-k8s.io-c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad-runc.0gC301.mount: Deactivated successfully. Feb 13 16:06:24.685348 systemd-networkd[1859]: cali6c96ba65918: Gained IPv6LL Feb 13 16:06:24.785817 systemd[1]: Started sshd@10-172.31.31.154:22-139.178.68.195:52726.service - OpenSSH per-connection server daemon (139.178.68.195:52726). Feb 13 16:06:25.016924 sshd[5528]: Accepted publickey for core from 139.178.68.195 port 52726 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:25.028232 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:25.047299 systemd-logind[2001]: New session 11 of user core. Feb 13 16:06:25.054889 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:06:25.393883 sshd[5528]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:25.403875 systemd[1]: sshd@10-172.31.31.154:22-139.178.68.195:52726.service: Deactivated successfully. Feb 13 16:06:25.411967 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:06:25.418339 systemd-logind[2001]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:06:25.426080 systemd-logind[2001]: Removed session 11. 
Feb 13 16:06:25.599709 containerd[2027]: time="2025-02-13T16:06:25.599622136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.602391 containerd[2027]: time="2025-02-13T16:06:25.602296792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 16:06:25.604661 containerd[2027]: time="2025-02-13T16:06:25.604561084Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.610248 containerd[2027]: time="2025-02-13T16:06:25.609699232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.611191 containerd[2027]: time="2025-02-13T16:06:25.611130688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.67445325s" Feb 13 16:06:25.611191 containerd[2027]: time="2025-02-13T16:06:25.611189296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 16:06:25.614545 containerd[2027]: time="2025-02-13T16:06:25.614484652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 16:06:25.653812 containerd[2027]: time="2025-02-13T16:06:25.653475196Z" level=info msg="CreateContainer within sandbox \"de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 16:06:25.694671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273710385.mount: Deactivated successfully. Feb 13 16:06:25.702926 containerd[2027]: time="2025-02-13T16:06:25.695298197Z" level=info msg="CreateContainer within sandbox \"de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"32c7907842c42e238496c5afc8384e8e03ac0c2e5415809669840a263c7d3b84\"" Feb 13 16:06:25.702926 containerd[2027]: time="2025-02-13T16:06:25.702108329Z" level=info msg="StartContainer for \"32c7907842c42e238496c5afc8384e8e03ac0c2e5415809669840a263c7d3b84\"" Feb 13 16:06:25.757304 systemd[1]: Started cri-containerd-32c7907842c42e238496c5afc8384e8e03ac0c2e5415809669840a263c7d3b84.scope - libcontainer container 32c7907842c42e238496c5afc8384e8e03ac0c2e5415809669840a263c7d3b84. 
Feb 13 16:06:25.834919 containerd[2027]: time="2025-02-13T16:06:25.834824933Z" level=info msg="StartContainer for \"32c7907842c42e238496c5afc8384e8e03ac0c2e5415809669840a263c7d3b84\" returns successfully" Feb 13 16:06:26.210584 kubelet[3418]: I0213 16:06:26.210448 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fdf6d44f6-cv7cc" podStartSLOduration=31.523608421 podStartE2EDuration="35.210398403s" podCreationTimestamp="2025-02-13 16:05:51 +0000 UTC" firstStartedPulling="2025-02-13 16:06:21.926970074 +0000 UTC m=+45.652189884" lastFinishedPulling="2025-02-13 16:06:25.613760056 +0000 UTC m=+49.338979866" observedRunningTime="2025-02-13 16:06:26.207806715 +0000 UTC m=+49.933026609" watchObservedRunningTime="2025-02-13 16:06:26.210398403 +0000 UTC m=+49.935618201" Feb 13 16:06:26.818230 ntpd[1993]: Listen normally on 8 vxlan.calico 192.168.94.64:123 Feb 13 16:06:26.818381 ntpd[1993]: Listen normally on 9 vxlan.calico [fe80::648d:cff:fe7a:30d9%4]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 8 vxlan.calico 192.168.94.64:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 9 vxlan.calico [fe80::648d:cff:fe7a:30d9%4]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 10 calia030ea3496b [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 11 cali7726a139177 [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 12 calice3781bcc59 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 13 cali77aa2e49be4 [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 14 calic907a6034df [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 16:06:26.818930 ntpd[1993]: 13 Feb 16:06:26 ntpd[1993]: Listen normally on 15 cali6c96ba65918 [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 16:06:26.818473 ntpd[1993]: Listen normally on 10 calia030ea3496b [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 16:06:26.818543 ntpd[1993]: Listen normally on 11 cali7726a139177 [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 16:06:26.818611 ntpd[1993]: Listen normally on 12 calice3781bcc59 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 16:06:26.818687 ntpd[1993]: Listen normally on 13 cali77aa2e49be4 [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 16:06:26.818760 ntpd[1993]: Listen normally on 14 calic907a6034df [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 16:06:26.818831 ntpd[1993]: Listen normally on 15 cali6c96ba65918 [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 16:06:27.179827 kubelet[3418]: I0213 16:06:27.179419 3418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:28.785508 containerd[2027]: time="2025-02-13T16:06:28.785129216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:28.787568 containerd[2027]: time="2025-02-13T16:06:28.787500344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 16:06:28.789238 containerd[2027]: time="2025-02-13T16:06:28.789052232Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
16:06:28.794818 containerd[2027]: time="2025-02-13T16:06:28.794695988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:28.797070 containerd[2027]: time="2025-02-13T16:06:28.796442780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.181694548s" Feb 13 16:06:28.797070 containerd[2027]: time="2025-02-13T16:06:28.796515728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 16:06:28.803081 containerd[2027]: time="2025-02-13T16:06:28.802422764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 16:06:28.804976 containerd[2027]: time="2025-02-13T16:06:28.804905612Z" level=info msg="CreateContainer within sandbox \"87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 16:06:28.839518 containerd[2027]: time="2025-02-13T16:06:28.839439152Z" level=info msg="CreateContainer within sandbox \"87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"558bb8453b8dda358e057afec22664d267779ad91995e469b8170c65d9839a3b\"" Feb 13 16:06:28.844461 containerd[2027]: time="2025-02-13T16:06:28.843682928Z" level=info msg="StartContainer for \"558bb8453b8dda358e057afec22664d267779ad91995e469b8170c65d9839a3b\"" Feb 13 16:06:28.919334 systemd[1]: Started cri-containerd-558bb8453b8dda358e057afec22664d267779ad91995e469b8170c65d9839a3b.scope - libcontainer container 558bb8453b8dda358e057afec22664d267779ad91995e469b8170c65d9839a3b. 
Feb 13 16:06:29.022485 containerd[2027]: time="2025-02-13T16:06:29.022411013Z" level=info msg="StartContainer for \"558bb8453b8dda358e057afec22664d267779ad91995e469b8170c65d9839a3b\" returns successfully" Feb 13 16:06:29.189283 containerd[2027]: time="2025-02-13T16:06:29.188820222Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:29.191396 containerd[2027]: time="2025-02-13T16:06:29.191335482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 16:06:29.204023 containerd[2027]: time="2025-02-13T16:06:29.202657674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 400.170218ms" Feb 13 16:06:29.204023 containerd[2027]: time="2025-02-13T16:06:29.202724226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 16:06:29.210530 containerd[2027]: time="2025-02-13T16:06:29.210454998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 16:06:29.216552 containerd[2027]: time="2025-02-13T16:06:29.216288126Z" level=info msg="CreateContainer within sandbox \"f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 16:06:29.261119 containerd[2027]: time="2025-02-13T16:06:29.261044646Z" level=info msg="CreateContainer within sandbox \"f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"43ffb96cbfe4929e23611caa22d8b82fef3366159f1c60f3822adcc43c5ebc46\"" Feb 13 16:06:29.262897 containerd[2027]: time="2025-02-13T16:06:29.262767426Z" level=info msg="StartContainer for \"43ffb96cbfe4929e23611caa22d8b82fef3366159f1c60f3822adcc43c5ebc46\"" Feb 13 16:06:29.367435 systemd[1]: Started cri-containerd-43ffb96cbfe4929e23611caa22d8b82fef3366159f1c60f3822adcc43c5ebc46.scope - libcontainer container 43ffb96cbfe4929e23611caa22d8b82fef3366159f1c60f3822adcc43c5ebc46. 
Feb 13 16:06:29.530184 containerd[2027]: time="2025-02-13T16:06:29.530073620Z" level=info msg="StartContainer for \"43ffb96cbfe4929e23611caa22d8b82fef3366159f1c60f3822adcc43c5ebc46\" returns successfully" Feb 13 16:06:30.229194 kubelet[3418]: I0213 16:06:30.229032 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68484dd575-8k4kl" podStartSLOduration=33.706884315 podStartE2EDuration="40.228966403s" podCreationTimestamp="2025-02-13 16:05:50 +0000 UTC" firstStartedPulling="2025-02-13 16:06:22.682386722 +0000 UTC m=+46.407606532" lastFinishedPulling="2025-02-13 16:06:29.20446881 +0000 UTC m=+52.929688620" observedRunningTime="2025-02-13 16:06:30.228012127 +0000 UTC m=+53.953231973" watchObservedRunningTime="2025-02-13 16:06:30.228966403 +0000 UTC m=+53.954186213" Feb 13 16:06:30.232681 kubelet[3418]: I0213 16:06:30.232311 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68484dd575-ncn9d" podStartSLOduration=33.791738463 podStartE2EDuration="40.232286299s" podCreationTimestamp="2025-02-13 16:05:50 +0000 UTC" firstStartedPulling="2025-02-13 16:06:22.357570768 +0000 UTC m=+46.082790578" lastFinishedPulling="2025-02-13 16:06:28.798118604 +0000 UTC m=+52.523338414" observedRunningTime="2025-02-13 16:06:29.278593158 +0000 UTC m=+53.003813064" watchObservedRunningTime="2025-02-13 16:06:30.232286299 +0000 UTC m=+53.957506133" Feb 13 16:06:30.442584 systemd[1]: Started sshd@11-172.31.31.154:22-139.178.68.195:40692.service - OpenSSH per-connection server daemon (139.178.68.195:40692). Feb 13 16:06:30.484347 kubelet[3418]: I0213 16:06:30.480664 3418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:30.670297 sshd[5672]: Accepted publickey for core from 139.178.68.195 port 40692 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:30.673678 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:30.692434 systemd-logind[2001]: New session 12 of user core. Feb 13 16:06:30.701364 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:06:31.116593 sshd[5672]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:31.134671 systemd[1]: sshd@11-172.31.31.154:22-139.178.68.195:40692.service: Deactivated successfully. Feb 13 16:06:31.141251 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:06:31.150349 systemd-logind[2001]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:06:31.184154 systemd[1]: Started sshd@12-172.31.31.154:22-139.178.68.195:40696.service - OpenSSH per-connection server daemon (139.178.68.195:40696). Feb 13 16:06:31.187490 systemd-logind[2001]: Removed session 12. 
Feb 13 16:06:31.214945 kubelet[3418]: I0213 16:06:31.213959 3418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:31.358344 containerd[2027]: time="2025-02-13T16:06:31.357248973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:31.361189 containerd[2027]: time="2025-02-13T16:06:31.361142733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 16:06:31.366106 containerd[2027]: time="2025-02-13T16:06:31.366051273Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:31.370115 containerd[2027]: time="2025-02-13T16:06:31.369948297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:31.375348 containerd[2027]: time="2025-02-13T16:06:31.372575925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.162050199s" Feb 13 16:06:31.375348 containerd[2027]: time="2025-02-13T16:06:31.374193009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 16:06:31.384651 containerd[2027]: time="2025-02-13T16:06:31.384329613Z" level=info msg="CreateContainer within sandbox \"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 16:06:31.416199 containerd[2027]: time="2025-02-13T16:06:31.416136405Z" level=info msg="CreateContainer within sandbox \"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"17354706b6ab028450010e816d8e877a6553efb6e547df4d207e209ccd6d59fd\"" Feb 13 16:06:31.419074 containerd[2027]: time="2025-02-13T16:06:31.417424257Z" level=info msg="StartContainer for \"17354706b6ab028450010e816d8e877a6553efb6e547df4d207e209ccd6d59fd\"" Feb 13 16:06:31.430264 sshd[5734]: Accepted publickey for core from 139.178.68.195 port 40696 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:31.441330 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:31.463332 systemd-logind[2001]: New session 13 of user core. Feb 13 16:06:31.473218 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 16:06:31.547439 systemd[1]: run-containerd-runc-k8s.io-17354706b6ab028450010e816d8e877a6553efb6e547df4d207e209ccd6d59fd-runc.4VnDyi.mount: Deactivated successfully. Feb 13 16:06:31.563483 systemd[1]: Started cri-containerd-17354706b6ab028450010e816d8e877a6553efb6e547df4d207e209ccd6d59fd.scope - libcontainer container 17354706b6ab028450010e816d8e877a6553efb6e547df4d207e209ccd6d59fd. 
Feb 13 16:06:31.830463 containerd[2027]: time="2025-02-13T16:06:31.830324927Z" level=info msg="StartContainer for \"17354706b6ab028450010e816d8e877a6553efb6e547df4d207e209ccd6d59fd\" returns successfully" Feb 13 16:06:31.840334 containerd[2027]: time="2025-02-13T16:06:31.837589427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 16:06:31.974497 sshd[5734]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:31.983985 systemd[1]: sshd@12-172.31.31.154:22-139.178.68.195:40696.service: Deactivated successfully. Feb 13 16:06:31.984270 systemd-logind[2001]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:06:31.999556 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:06:32.041497 systemd[1]: Started sshd@13-172.31.31.154:22-139.178.68.195:40700.service - OpenSSH per-connection server daemon (139.178.68.195:40700). Feb 13 16:06:32.045747 systemd-logind[2001]: Removed session 13. Feb 13 16:06:32.230443 kubelet[3418]: I0213 16:06:32.230394 3418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:32.250039 sshd[5783]: Accepted publickey for core from 139.178.68.195 port 40700 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:32.253643 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:32.271095 systemd-logind[2001]: New session 14 of user core. Feb 13 16:06:32.276297 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 16:06:32.617373 sshd[5783]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:32.628424 systemd[1]: sshd@13-172.31.31.154:22-139.178.68.195:40700.service: Deactivated successfully. Feb 13 16:06:32.639314 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:06:32.649778 systemd-logind[2001]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:06:32.652242 systemd-logind[2001]: Removed session 14. 
Feb 13 16:06:34.002055 containerd[2027]: time="2025-02-13T16:06:34.001654402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:34.006833 containerd[2027]: time="2025-02-13T16:06:34.006125458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 16:06:34.006833 containerd[2027]: time="2025-02-13T16:06:34.006369070Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:34.012954 containerd[2027]: time="2025-02-13T16:06:34.012877702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:34.014160 containerd[2027]: time="2025-02-13T16:06:34.014084710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.176428551s" Feb 13 16:06:34.014160 containerd[2027]: time="2025-02-13T16:06:34.014154874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 16:06:34.020094 containerd[2027]: time="2025-02-13T16:06:34.020027758Z" level=info msg="CreateContainer within sandbox \"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 16:06:34.058721 containerd[2027]: time="2025-02-13T16:06:34.058656562Z" level=info msg="CreateContainer within sandbox \"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b9b448137f4307a7749b1866082a86fb54aa81f020ae8e1404fb0c11c5d10e3c\"" Feb 13 16:06:34.059701 containerd[2027]: time="2025-02-13T16:06:34.059645650Z" level=info msg="StartContainer for \"b9b448137f4307a7749b1866082a86fb54aa81f020ae8e1404fb0c11c5d10e3c\"" Feb 13 16:06:34.168458 systemd[1]: Started cri-containerd-b9b448137f4307a7749b1866082a86fb54aa81f020ae8e1404fb0c11c5d10e3c.scope - libcontainer container b9b448137f4307a7749b1866082a86fb54aa81f020ae8e1404fb0c11c5d10e3c. 
Feb 13 16:06:34.289308 containerd[2027]: time="2025-02-13T16:06:34.288673391Z" level=info msg="StartContainer for \"b9b448137f4307a7749b1866082a86fb54aa81f020ae8e1404fb0c11c5d10e3c\" returns successfully" Feb 13 16:06:34.740041 kubelet[3418]: I0213 16:06:34.739449 3418 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 16:06:34.740041 kubelet[3418]: I0213 16:06:34.739511 3418 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 16:06:36.514312 containerd[2027]: time="2025-02-13T16:06:36.514243886Z" level=info msg="StopPodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\"" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.590 [WARNING][5853] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"b196c8b9-8607-4d55-8a5e-bd31f38b0664", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea", Pod:"calico-apiserver-68484dd575-8k4kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic907a6034df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.591 [INFO][5853] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.591 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" iface="eth0" netns="" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.591 [INFO][5853] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.591 [INFO][5853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.630 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.631 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.631 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.643 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.643 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.645 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:36.651815 containerd[2027]: 2025-02-13 16:06:36.648 [INFO][5853] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.651815 containerd[2027]: time="2025-02-13T16:06:36.651577563Z" level=info msg="TearDown network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" successfully" Feb 13 16:06:36.651815 containerd[2027]: time="2025-02-13T16:06:36.651614427Z" level=info msg="StopPodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" returns successfully" Feb 13 16:06:36.652868 containerd[2027]: time="2025-02-13T16:06:36.652670307Z" level=info msg="RemovePodSandbox for \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\"" Feb 13 16:06:36.652868 containerd[2027]: time="2025-02-13T16:06:36.652723827Z" level=info msg="Forcibly stopping sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\"" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.742 [WARNING][5880] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"b196c8b9-8607-4d55-8a5e-bd31f38b0664", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"f0dee7fa442a3a74e6b2716ac4a7cf7a646633b819367dbc196c086ae72a5bea", Pod:"calico-apiserver-68484dd575-8k4kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic907a6034df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.742 [INFO][5880] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.742 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" iface="eth0" netns="" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.743 [INFO][5880] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.743 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.788 [INFO][5887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.788 [INFO][5887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.788 [INFO][5887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.802 [WARNING][5887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.803 [INFO][5887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" HandleID="k8s-pod-network.15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--8k4kl-eth0" Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.805 [INFO][5887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:36.811524 containerd[2027]: 2025-02-13 16:06:36.808 [INFO][5880] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7" Feb 13 16:06:36.812495 containerd[2027]: time="2025-02-13T16:06:36.811497424Z" level=info msg="TearDown network for sandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" successfully" Feb 13 16:06:36.820079 containerd[2027]: time="2025-02-13T16:06:36.819931960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:36.820079 containerd[2027]: time="2025-02-13T16:06:36.820074532Z" level=info msg="RemovePodSandbox \"15e6cc11724dce1cf2296faa3064c914c850f713a790e9b5e87d4592ea01a8a7\" returns successfully" Feb 13 16:06:36.821552 containerd[2027]: time="2025-02-13T16:06:36.821085052Z" level=info msg="StopPodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\"" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.897 [WARNING][5905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0", GenerateName:"calico-kube-controllers-7fdf6d44f6-", Namespace:"calico-system", SelfLink:"", UID:"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fdf6d44f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b", Pod:"calico-kube-controllers-7fdf6d44f6-cv7cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7726a139177", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.897 [INFO][5905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.898 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" iface="eth0" netns="" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.898 [INFO][5905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.898 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.950 [INFO][5911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.950 [INFO][5911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.950 [INFO][5911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.962 [WARNING][5911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.962 [INFO][5911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.965 [INFO][5911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:36.970424 containerd[2027]: 2025-02-13 16:06:36.968 [INFO][5905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:36.971734 containerd[2027]: time="2025-02-13T16:06:36.971178809Z" level=info msg="TearDown network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" successfully" Feb 13 16:06:36.971734 containerd[2027]: time="2025-02-13T16:06:36.971219645Z" level=info msg="StopPodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" returns successfully" Feb 13 16:06:36.972974 containerd[2027]: time="2025-02-13T16:06:36.972463601Z" level=info msg="RemovePodSandbox for \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\"" Feb 13 16:06:36.972974 containerd[2027]: time="2025-02-13T16:06:36.972516041Z" level=info msg="Forcibly stopping sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\"" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.051 [WARNING][5929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0", GenerateName:"calico-kube-controllers-7fdf6d44f6-", Namespace:"calico-system", SelfLink:"", UID:"bdbd4dc3-7177-43ef-82e7-b7dee3c76ac1", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fdf6d44f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"de315f85d1fda1ce884f7c65b1f0a7795d4e1ac9a99e1bebd535afb4ff946f3b", Pod:"calico-kube-controllers-7fdf6d44f6-cv7cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7726a139177", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.051 [INFO][5929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.051 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" iface="eth0" netns="" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.051 [INFO][5929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.051 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.118 [INFO][5936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.119 [INFO][5936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.119 [INFO][5936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.133 [WARNING][5936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.133 [INFO][5936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" HandleID="k8s-pod-network.201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Workload="ip--172--31--31--154-k8s-calico--kube--controllers--7fdf6d44f6--cv7cc-eth0" Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.135 [INFO][5936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.142445 containerd[2027]: 2025-02-13 16:06:37.138 [INFO][5929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24" Feb 13 16:06:37.143563 containerd[2027]: time="2025-02-13T16:06:37.143319914Z" level=info msg="TearDown network for sandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" successfully" Feb 13 16:06:37.151519 containerd[2027]: time="2025-02-13T16:06:37.151442390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:37.151698 containerd[2027]: time="2025-02-13T16:06:37.151548710Z" level=info msg="RemovePodSandbox \"201dfd1118a56407c1c659545970131ce444dda22676a40cf776265e085dcb24\" returns successfully" Feb 13 16:06:37.152883 containerd[2027]: time="2025-02-13T16:06:37.152339798Z" level=info msg="StopPodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\"" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.222 [WARNING][5955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75b40a5f-a447-41dd-bf81-01b27d71b463", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33", Pod:"coredns-6f6b679f8f-jn895", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice3781bcc59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.223 [INFO][5955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.223 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" iface="eth0" netns="" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.223 [INFO][5955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.223 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.259 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.259 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.259 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.275 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.275 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.282 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.290836 containerd[2027]: 2025-02-13 16:06:37.288 [INFO][5955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.292701 containerd[2027]: time="2025-02-13T16:06:37.290893526Z" level=info msg="TearDown network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" successfully" Feb 13 16:06:37.292701 containerd[2027]: time="2025-02-13T16:06:37.290939630Z" level=info msg="StopPodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" returns successfully" Feb 13 16:06:37.292701 containerd[2027]: time="2025-02-13T16:06:37.291692222Z" level=info msg="RemovePodSandbox for \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\"" Feb 13 16:06:37.292701 containerd[2027]: time="2025-02-13T16:06:37.291742514Z" level=info msg="Forcibly stopping sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\"" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.358 [WARNING][5980] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75b40a5f-a447-41dd-bf81-01b27d71b463", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"16e42a936dd8015b22f10bf5808fee20dede6fd59668117f8a54e1851f5c3b33", Pod:"coredns-6f6b679f8f-jn895", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice3781bcc59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.359 [INFO][5980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.359 [INFO][5980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" iface="eth0" netns="" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.359 [INFO][5980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.359 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.410 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.413 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.413 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.429 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.429 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" HandleID="k8s-pod-network.2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--jn895-eth0" Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.431 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.437080 containerd[2027]: 2025-02-13 16:06:37.434 [INFO][5980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755" Feb 13 16:06:37.437080 containerd[2027]: time="2025-02-13T16:06:37.436935903Z" level=info msg="TearDown network for sandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" successfully" Feb 13 16:06:37.445077 containerd[2027]: time="2025-02-13T16:06:37.444900015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:37.445077 containerd[2027]: time="2025-02-13T16:06:37.445028307Z" level=info msg="RemovePodSandbox \"2dcbfae7ff0130b5b30a5c73204f3507fbc0fd75677f3f300951f9a4d7e4d755\" returns successfully" Feb 13 16:06:37.445691 containerd[2027]: time="2025-02-13T16:06:37.445640547Z" level=info msg="StopPodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\"" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.519 [WARNING][6010] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a30bf926-3951-47c8-a0ab-9fac70d16aca", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f", Pod:"coredns-6f6b679f8f-mpmb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia030ea3496b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.519 [INFO][6010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.520 [INFO][6010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" iface="eth0" netns="" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.520 [INFO][6010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.520 [INFO][6010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.563 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.563 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.563 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.576 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.576 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.578 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.583774 containerd[2027]: 2025-02-13 16:06:37.581 [INFO][6010] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.585590 containerd[2027]: time="2025-02-13T16:06:37.583967080Z" level=info msg="TearDown network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" successfully" Feb 13 16:06:37.585590 containerd[2027]: time="2025-02-13T16:06:37.584044036Z" level=info msg="StopPodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" returns successfully" Feb 13 16:06:37.586584 containerd[2027]: time="2025-02-13T16:06:37.585848044Z" level=info msg="RemovePodSandbox for \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\"" Feb 13 16:06:37.586584 containerd[2027]: time="2025-02-13T16:06:37.585912124Z" level=info msg="Forcibly stopping sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\"" Feb 13 16:06:37.664501 systemd[1]: Started sshd@14-172.31.31.154:22-139.178.68.195:37026.service - OpenSSH per-connection server daemon (139.178.68.195:37026). Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.679 [WARNING][6034] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a30bf926-3951-47c8-a0ab-9fac70d16aca", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"87512dd054318dbffb9fb1ba385d23a6115fa7920e0f509ee0194def7293208f", Pod:"coredns-6f6b679f8f-mpmb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia030ea3496b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.681 [INFO][6034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.681 [INFO][6034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" iface="eth0" netns="" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.682 [INFO][6034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.682 [INFO][6034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.752 [INFO][6044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.752 [INFO][6044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.752 [INFO][6044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.767 [WARNING][6044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.767 [INFO][6044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" HandleID="k8s-pod-network.317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Workload="ip--172--31--31--154-k8s-coredns--6f6b679f8f--mpmb9-eth0" Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.769 [INFO][6044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.775359 containerd[2027]: 2025-02-13 16:06:37.772 [INFO][6034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc" Feb 13 16:06:37.777171 containerd[2027]: time="2025-02-13T16:06:37.776683565Z" level=info msg="TearDown network for sandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" successfully" Feb 13 16:06:37.784690 containerd[2027]: time="2025-02-13T16:06:37.784519577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:37.784896 containerd[2027]: time="2025-02-13T16:06:37.784696709Z" level=info msg="RemovePodSandbox \"317244992703cc9d572367864a902c7e8f05b40cff144bb47258544e47fe4ecc\" returns successfully" Feb 13 16:06:37.785492 containerd[2027]: time="2025-02-13T16:06:37.785430185Z" level=info msg="StopPodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\"" Feb 13 16:06:37.869527 sshd[6043]: Accepted publickey for core from 139.178.68.195 port 37026 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:37.874096 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:37.886390 systemd-logind[2001]: New session 15 of user core. Feb 13 16:06:37.891423 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.871 [WARNING][6065] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a33f313d-041d-426a-9914-5ac8d0c42820", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad", Pod:"csi-node-driver-t4668", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c96ba65918", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.872 [INFO][6065] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.872 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" iface="eth0" netns="" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.872 [INFO][6065] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.872 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.931 [INFO][6071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.931 [INFO][6071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.932 [INFO][6071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.953 [WARNING][6071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.953 [INFO][6071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.957 [INFO][6071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.962913 containerd[2027]: 2025-02-13 16:06:37.959 [INFO][6065] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:37.962913 containerd[2027]: time="2025-02-13T16:06:37.962735886Z" level=info msg="TearDown network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" successfully" Feb 13 16:06:37.962913 containerd[2027]: time="2025-02-13T16:06:37.962774406Z" level=info msg="StopPodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" returns successfully" Feb 13 16:06:37.964491 containerd[2027]: time="2025-02-13T16:06:37.964430610Z" level=info msg="RemovePodSandbox for \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\"" Feb 13 16:06:37.964609 containerd[2027]: time="2025-02-13T16:06:37.964491270Z" level=info msg="Forcibly stopping sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\"" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.103 [WARNING][6090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a33f313d-041d-426a-9914-5ac8d0c42820", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"c15e08eb3477ee0d06cacb48335d435cf839db4355ab14fde6f5197c82c2d0ad", Pod:"csi-node-driver-t4668", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6c96ba65918", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.103 [INFO][6090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.103 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" iface="eth0" netns="" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.103 [INFO][6090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.103 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.226 [INFO][6103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.226 [INFO][6103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.226 [INFO][6103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.243 [WARNING][6103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.243 [INFO][6103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" HandleID="k8s-pod-network.e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Workload="ip--172--31--31--154-k8s-csi--node--driver--t4668-eth0" Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.248 [INFO][6103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:38.260207 containerd[2027]: 2025-02-13 16:06:38.255 [INFO][6090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291" Feb 13 16:06:38.260207 containerd[2027]: time="2025-02-13T16:06:38.259897623Z" level=info msg="TearDown network for sandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" successfully" Feb 13 16:06:38.273515 containerd[2027]: time="2025-02-13T16:06:38.272344791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:38.273515 containerd[2027]: time="2025-02-13T16:06:38.272478555Z" level=info msg="RemovePodSandbox \"e77de6968dbffc0053e5ccc5021d4ccf63d65327eb96ec515ef47abab065f291\" returns successfully" Feb 13 16:06:38.274475 containerd[2027]: time="2025-02-13T16:06:38.274377531Z" level=info msg="StopPodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\"" Feb 13 16:06:38.296535 sshd[6043]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:38.309210 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:06:38.311131 systemd[1]: sshd@14-172.31.31.154:22-139.178.68.195:37026.service: Deactivated successfully. Feb 13 16:06:38.325145 systemd-logind[2001]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:06:38.329859 systemd-logind[2001]: Removed session 15. Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.433 [WARNING][6122] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"a192015d-967a-446f-82a0-27e3609df636", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3", Pod:"calico-apiserver-68484dd575-ncn9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77aa2e49be4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.433 [INFO][6122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.433 [INFO][6122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" iface="eth0" netns="" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.433 [INFO][6122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.433 [INFO][6122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.513 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.514 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.514 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.542 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.542 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.544 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:38.552162 containerd[2027]: 2025-02-13 16:06:38.548 [INFO][6122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.553951 containerd[2027]: time="2025-02-13T16:06:38.552137585Z" level=info msg="TearDown network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" successfully" Feb 13 16:06:38.553951 containerd[2027]: time="2025-02-13T16:06:38.552197957Z" level=info msg="StopPodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" returns successfully" Feb 13 16:06:38.555844 containerd[2027]: time="2025-02-13T16:06:38.555667541Z" level=info msg="RemovePodSandbox for \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\"" Feb 13 16:06:38.556040 containerd[2027]: time="2025-02-13T16:06:38.555854681Z" level=info msg="Forcibly stopping sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\"" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.661 [WARNING][6149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0", GenerateName:"calico-apiserver-68484dd575-", Namespace:"calico-apiserver", SelfLink:"", UID:"a192015d-967a-446f-82a0-27e3609df636", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68484dd575", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-154", ContainerID:"87588785f02831288526b2910fcd48b9ed900b535622c12dc728564500f9fcf3", Pod:"calico-apiserver-68484dd575-ncn9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77aa2e49be4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.662 [INFO][6149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.662 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" iface="eth0" netns="" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.662 [INFO][6149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.662 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.729 [INFO][6155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.730 [INFO][6155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.730 [INFO][6155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.746 [WARNING][6155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.747 [INFO][6155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" HandleID="k8s-pod-network.248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Workload="ip--172--31--31--154-k8s-calico--apiserver--68484dd575--ncn9d-eth0" Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.754 [INFO][6155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:38.762679 containerd[2027]: 2025-02-13 16:06:38.758 [INFO][6149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996" Feb 13 16:06:38.765778 containerd[2027]: time="2025-02-13T16:06:38.762731982Z" level=info msg="TearDown network for sandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" successfully" Feb 13 16:06:38.771186 containerd[2027]: time="2025-02-13T16:06:38.771088770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:38.771348 containerd[2027]: time="2025-02-13T16:06:38.771218622Z" level=info msg="RemovePodSandbox \"248d227e2a6f0908f9fddbc104ab8853c82574c65557b3c40ef8d2769c5fd996\" returns successfully" Feb 13 16:06:43.335533 systemd[1]: Started sshd@15-172.31.31.154:22-139.178.68.195:37042.service - OpenSSH per-connection server daemon (139.178.68.195:37042). Feb 13 16:06:43.516962 sshd[6171]: Accepted publickey for core from 139.178.68.195 port 37042 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:43.520491 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:43.528580 systemd-logind[2001]: New session 16 of user core. Feb 13 16:06:43.536281 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 16:06:43.793157 sshd[6171]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:43.800305 systemd[1]: sshd@15-172.31.31.154:22-139.178.68.195:37042.service: Deactivated successfully. Feb 13 16:06:43.804807 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 16:06:43.807980 systemd-logind[2001]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:06:43.810637 systemd-logind[2001]: Removed session 16. Feb 13 16:06:48.836561 systemd[1]: Started sshd@16-172.31.31.154:22-139.178.68.195:41632.service - OpenSSH per-connection server daemon (139.178.68.195:41632). Feb 13 16:06:49.016255 sshd[6189]: Accepted publickey for core from 139.178.68.195 port 41632 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:49.019079 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:49.027283 systemd-logind[2001]: New session 17 of user core. Feb 13 16:06:49.035570 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 16:06:49.316103 sshd[6189]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:49.322467 systemd[1]: sshd@16-172.31.31.154:22-139.178.68.195:41632.service: Deactivated successfully. Feb 13 16:06:49.328539 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:06:49.334752 systemd-logind[2001]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:06:49.336922 systemd-logind[2001]: Removed session 17. Feb 13 16:06:54.364819 systemd[1]: Started sshd@17-172.31.31.154:22-139.178.68.195:41644.service - OpenSSH per-connection server daemon (139.178.68.195:41644). Feb 13 16:06:54.550904 sshd[6222]: Accepted publickey for core from 139.178.68.195 port 41644 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:54.555037 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:54.564397 systemd-logind[2001]: New session 18 of user core. Feb 13 16:06:54.571274 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:06:54.844239 sshd[6222]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:54.850671 systemd[1]: sshd@17-172.31.31.154:22-139.178.68.195:41644.service: Deactivated successfully. Feb 13 16:06:54.855518 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:06:54.861900 systemd-logind[2001]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:06:54.864275 systemd-logind[2001]: Removed session 18. Feb 13 16:06:54.887741 systemd[1]: Started sshd@18-172.31.31.154:22-139.178.68.195:41652.service - OpenSSH per-connection server daemon (139.178.68.195:41652). Feb 13 16:06:55.074774 sshd[6235]: Accepted publickey for core from 139.178.68.195 port 41652 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:55.078068 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:55.087053 systemd-logind[2001]: New session 19 of user core. Feb 13 16:06:55.092348 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:06:55.649560 sshd[6235]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:55.660195 systemd[1]: sshd@18-172.31.31.154:22-139.178.68.195:41652.service: Deactivated successfully. Feb 13 16:06:55.668332 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:06:55.674543 systemd-logind[2001]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:06:55.695643 systemd[1]: Started sshd@19-172.31.31.154:22-139.178.68.195:41666.service - OpenSSH per-connection server daemon (139.178.68.195:41666). Feb 13 16:06:55.697920 systemd-logind[2001]: Removed session 19. Feb 13 16:06:55.876635 sshd[6246]: Accepted publickey for core from 139.178.68.195 port 41666 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:55.879765 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:55.890620 systemd-logind[2001]: New session 20 of user core. Feb 13 16:06:55.896345 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:06:59.126788 sshd[6246]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:59.136117 systemd[1]: sshd@19-172.31.31.154:22-139.178.68.195:41666.service: Deactivated successfully. Feb 13 16:06:59.145851 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:06:59.147929 systemd[1]: session-20.scope: Consumed 1.076s CPU time. Feb 13 16:06:59.151293 systemd-logind[2001]: Session 20 logged out. 
Waiting for processes to exit. Feb 13 16:06:59.192212 systemd[1]: Started sshd@20-172.31.31.154:22-139.178.68.195:44694.service - OpenSSH per-connection server daemon (139.178.68.195:44694). Feb 13 16:06:59.194765 systemd-logind[2001]: Removed session 20. Feb 13 16:06:59.364947 sshd[6270]: Accepted publickey for core from 139.178.68.195 port 44694 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:59.367800 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:59.376376 systemd-logind[2001]: New session 21 of user core. Feb 13 16:06:59.384334 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:06:59.925838 sshd[6270]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:59.931323 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:06:59.933105 systemd[1]: sshd@20-172.31.31.154:22-139.178.68.195:44694.service: Deactivated successfully. Feb 13 16:06:59.944183 systemd-logind[2001]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:06:59.972493 systemd[1]: Started sshd@21-172.31.31.154:22-139.178.68.195:44698.service - OpenSSH per-connection server daemon (139.178.68.195:44698). Feb 13 16:06:59.975087 systemd-logind[2001]: Removed session 21. Feb 13 16:07:00.136950 sshd[6281]: Accepted publickey for core from 139.178.68.195 port 44698 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:00.140137 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:00.148531 systemd-logind[2001]: New session 22 of user core. Feb 13 16:07:00.155369 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:07:00.394415 sshd[6281]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:00.400277 systemd[1]: sshd@21-172.31.31.154:22-139.178.68.195:44698.service: Deactivated successfully. Feb 13 16:07:00.404965 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:07:00.410059 systemd-logind[2001]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:07:00.412467 systemd-logind[2001]: Removed session 22. Feb 13 16:07:05.437561 systemd[1]: Started sshd@22-172.31.31.154:22-139.178.68.195:44704.service - OpenSSH per-connection server daemon (139.178.68.195:44704). Feb 13 16:07:05.619325 sshd[6313]: Accepted publickey for core from 139.178.68.195 port 44704 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:05.622531 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:05.631459 systemd-logind[2001]: New session 23 of user core. Feb 13 16:07:05.637262 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 16:07:05.891059 sshd[6313]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:05.895873 systemd[1]: sshd@22-172.31.31.154:22-139.178.68.195:44704.service: Deactivated successfully. Feb 13 16:07:05.901672 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:07:05.906262 systemd-logind[2001]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:07:05.907825 systemd-logind[2001]: Removed session 23. Feb 13 16:07:10.931602 systemd[1]: Started sshd@23-172.31.31.154:22-139.178.68.195:42540.service - OpenSSH per-connection server daemon (139.178.68.195:42540). 
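Sessions 15 through 23 above all follow the same systemd-logind open/close cycle. A small sketch (assumed line shapes, one journal entry per line; these timestamps carry no year, so time.Parse yields year 0, which is harmless when only computing same-day durations) that pairs "New session N of user core." with "Removed session N." to measure how long each session stayed open:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

const stamp = "Jan 2 15:04:05.000000" // matches e.g. "Feb 13 16:06:43.528580"

var (
	tsRe    = regexp.MustCompile(`^([A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d{6})`)
	openRe  = regexp.MustCompile(`New session (\d+) of user core`)
	closeRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for sc.Scan() {
		line := sc.Text()
		ts := tsRe.FindStringSubmatch(line)
		if ts == nil {
			continue
		}
		t, err := time.Parse(stamp, ts[1])
		if err != nil {
			continue
		}
		if m := openRe.FindStringSubmatch(line); m != nil {
			opened[m[1]] = t
		}
		if m := closeRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[1]]; ok {
				fmt.Printf("session %s open for %v\n", m[1], t.Sub(start))
			}
		}
	}
}
```

Against the entries above this would show most sessions lasting well under a second of wall-clock padding around their SSH exchange, with session 20 (the long one consuming 1.076s of CPU) open for roughly three and a half minutes.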
Feb 13 16:07:11.126088 sshd[6328]: Accepted publickey for core from 139.178.68.195 port 42540 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:11.126828 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:11.141796 systemd-logind[2001]: New session 24 of user core. Feb 13 16:07:11.160249 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 16:07:11.490115 sshd[6328]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:11.498282 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 16:07:11.503593 systemd[1]: sshd@23-172.31.31.154:22-139.178.68.195:42540.service: Deactivated successfully. Feb 13 16:07:11.511950 systemd-logind[2001]: Session 24 logged out. Waiting for processes to exit. Feb 13 16:07:11.514865 systemd-logind[2001]: Removed session 24. Feb 13 16:07:16.529693 systemd[1]: Started sshd@24-172.31.31.154:22-139.178.68.195:51764.service - OpenSSH per-connection server daemon (139.178.68.195:51764). Feb 13 16:07:16.722507 sshd[6343]: Accepted publickey for core from 139.178.68.195 port 51764 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:16.729131 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:16.742912 systemd-logind[2001]: New session 25 of user core. Feb 13 16:07:16.749274 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 16:07:17.006296 sshd[6343]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:17.013239 systemd[1]: sshd@24-172.31.31.154:22-139.178.68.195:51764.service: Deactivated successfully. Feb 13 16:07:17.019480 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 16:07:17.021629 systemd-logind[2001]: Session 25 logged out. Waiting for processes to exit. Feb 13 16:07:17.023466 systemd-logind[2001]: Removed session 25. Feb 13 16:07:22.043601 systemd[1]: Started sshd@25-172.31.31.154:22-139.178.68.195:51778.service - OpenSSH per-connection server daemon (139.178.68.195:51778). Feb 13 16:07:22.222788 sshd[6397]: Accepted publickey for core from 139.178.68.195 port 51778 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:22.225649 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:22.234789 systemd-logind[2001]: New session 26 of user core. Feb 13 16:07:22.240267 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 16:07:22.483707 sshd[6397]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:22.491109 systemd[1]: sshd@25-172.31.31.154:22-139.178.68.195:51778.service: Deactivated successfully. Feb 13 16:07:22.495169 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 16:07:22.497684 systemd-logind[2001]: Session 26 logged out. Waiting for processes to exit. Feb 13 16:07:22.500238 systemd-logind[2001]: Removed session 26. Feb 13 16:07:27.523515 systemd[1]: Started sshd@26-172.31.31.154:22-139.178.68.195:37796.service - OpenSSH per-connection server daemon (139.178.68.195:37796). Feb 13 16:07:27.707106 sshd[6409]: Accepted publickey for core from 139.178.68.195 port 37796 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:27.709941 sshd[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:27.718120 systemd-logind[2001]: New session 27 of user core. Feb 13 16:07:27.723282 systemd[1]: Started session-27.scope - Session 27 of User core. 
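Each incoming connection above gets its own transient unit, e.g. sshd@24-172.31.31.154:22-139.178.68.195:51764.service, whose instance name appears to encode a connection counter plus the local and remote endpoints. A hypothetical helper (the naming scheme is inferred only from the unit names in this log, not from systemd documentation) that pulls those fields back out:

```go
package main

import (
	"fmt"
	"regexp"
)

// instRe splits "sshd@<n>-<local>:<port>-<peer>:<port>.service" into its parts,
// per the pattern observed in the log lines above.
var instRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

func main() {
	unit := "sshd@24-172.31.31.154:22-139.178.68.195:51764.service" // taken from the log above
	if m := instRe.FindStringSubmatch(unit); m != nil {
		fmt.Printf("connection #%s local=%s peer=%s\n", m[1], m[2], m[3])
		// connection #24 local=172.31.31.154:22 peer=139.178.68.195:51764
	}
}
```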
Feb 13 16:07:27.976828 sshd[6409]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:27.984583 systemd[1]: sshd@26-172.31.31.154:22-139.178.68.195:37796.service: Deactivated successfully. Feb 13 16:07:27.989614 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 16:07:27.993360 systemd-logind[2001]: Session 27 logged out. Waiting for processes to exit. Feb 13 16:07:27.996187 systemd-logind[2001]: Removed session 27. Feb 13 16:07:33.019015 systemd[1]: Started sshd@27-172.31.31.154:22-139.178.68.195:37812.service - OpenSSH per-connection server daemon (139.178.68.195:37812). Feb 13 16:07:33.203256 sshd[6441]: Accepted publickey for core from 139.178.68.195 port 37812 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:33.206221 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:33.214419 systemd-logind[2001]: New session 28 of user core. Feb 13 16:07:33.224473 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 16:07:33.465431 sshd[6441]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:33.472819 systemd[1]: sshd@27-172.31.31.154:22-139.178.68.195:37812.service: Deactivated successfully. Feb 13 16:07:33.476873 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 16:07:33.479699 systemd-logind[2001]: Session 28 logged out. Waiting for processes to exit. Feb 13 16:07:33.481912 systemd-logind[2001]: Removed session 28. Feb 13 16:07:47.098263 systemd[1]: cri-containerd-27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f.scope: Deactivated successfully. Feb 13 16:07:47.098842 systemd[1]: cri-containerd-27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f.scope: Consumed 6.744s CPU time. Feb 13 16:07:47.140308 containerd[2027]: time="2025-02-13T16:07:47.140202609Z" level=info msg="shim disconnected" id=27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f namespace=k8s.io Feb 13 16:07:47.140308 containerd[2027]: time="2025-02-13T16:07:47.140293689Z" level=warning msg="cleaning up after shim disconnected" id=27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f namespace=k8s.io Feb 13 16:07:47.141033 containerd[2027]: time="2025-02-13T16:07:47.140317965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:07:47.146560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f-rootfs.mount: Deactivated successfully. 
Feb 13 16:07:47.564886 kubelet[3418]: I0213 16:07:47.564816 3418 scope.go:117] "RemoveContainer" containerID="27d0bb9fd59dcaf0ddfc5f8a9a90918531310661a3ea970d2c835df19d6cb85f" Feb 13 16:07:47.569423 containerd[2027]: time="2025-02-13T16:07:47.568927427Z" level=info msg="CreateContainer within sandbox \"9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Feb 13 16:07:47.595198 containerd[2027]: time="2025-02-13T16:07:47.594843851Z" level=info msg="CreateContainer within sandbox \"9b9bc12aca6302de5963353b16388f8e651c780a46f18c3a2e5e5c7b805f577e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bdfa1d630c4beafe9c3306382135f7091f1d4b5d41021a3cd7f41fbac3d01c54\"" Feb 13 16:07:47.595949 containerd[2027]: time="2025-02-13T16:07:47.595816511Z" level=info msg="StartContainer for \"bdfa1d630c4beafe9c3306382135f7091f1d4b5d41021a3cd7f41fbac3d01c54\"" Feb 13 16:07:47.663475 systemd[1]: Started cri-containerd-bdfa1d630c4beafe9c3306382135f7091f1d4b5d41021a3cd7f41fbac3d01c54.scope - libcontainer container bdfa1d630c4beafe9c3306382135f7091f1d4b5d41021a3cd7f41fbac3d01c54. Feb 13 16:07:47.703242 systemd[1]: cri-containerd-3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b.scope: Deactivated successfully. Feb 13 16:07:47.703732 systemd[1]: cri-containerd-3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b.scope: Consumed 5.645s CPU time, 20.1M memory peak, 0B memory swap peak. Feb 13 16:07:47.766787 containerd[2027]: time="2025-02-13T16:07:47.766549932Z" level=info msg="StartContainer for \"bdfa1d630c4beafe9c3306382135f7091f1d4b5d41021a3cd7f41fbac3d01c54\" returns successfully" Feb 13 16:07:47.783696 containerd[2027]: time="2025-02-13T16:07:47.783551880Z" level=info msg="shim disconnected" id=3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b namespace=k8s.io Feb 13 16:07:47.784162 containerd[2027]: time="2025-02-13T16:07:47.783797436Z" level=warning msg="cleaning up after shim disconnected" id=3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b namespace=k8s.io Feb 13 16:07:47.784162 containerd[2027]: time="2025-02-13T16:07:47.783822192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:07:47.794313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b-rootfs.mount: Deactivated successfully. 
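Both restarts above follow the same sequence: the container's scope is deactivated (with its accumulated CPU time reported), the containerd shim disconnects and its rootfs mount is cleaned up, kubelet's scope.go logs RemoveContainer for the dead ID, and CreateContainer is reissued into the same sandbox with the Attempt counter bumped (Attempt:1 for tigera-operator above, where the original container was attempt 0). A stand-in sketch of that counter's semantics; the real type is the CRI API's ContainerMetadata (k8s.io/cri-api), simplified here:

```go
package main

import "fmt"

// ContainerMetadata mirrors the &ContainerMetadata{Name:..., Attempt:...}
// values embedded in the CreateContainer log lines above (simplified stand-in).
type ContainerMetadata struct {
	Name    string
	Attempt uint32
}

// nextAttempt keeps the container name stable and increments the restart
// ordinal, which is what produces Attempt:1 in the entries above.
func nextAttempt(prev ContainerMetadata) ContainerMetadata {
	return ContainerMetadata{Name: prev.Name, Attempt: prev.Attempt + 1}
}

func main() {
	dead := ContainerMetadata{Name: "tigera-operator", Attempt: 0}
	fmt.Printf("%+v\n", nextAttempt(dead)) // {Name:tigera-operator Attempt:1}
}
```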
Feb 13 16:07:48.570508 kubelet[3418]: I0213 16:07:48.570051 3418 scope.go:117] "RemoveContainer" containerID="3c657585fb92629bfdfebebf64b59c2498ff4d1b0c5c87d01c48ea19fa32ae1b" Feb 13 16:07:48.574657 containerd[2027]: time="2025-02-13T16:07:48.574570608Z" level=info msg="CreateContainer within sandbox \"16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 16:07:48.603227 containerd[2027]: time="2025-02-13T16:07:48.603143785Z" level=info msg="CreateContainer within sandbox \"16d0378269b47b4a99eae2df83c5b68d22102d768cb351f59f7bb188fd8fa28a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b4bf89445658ed862786a7a9bf2ff7086078eecee38533cb064b4e703f1537ec\"" Feb 13 16:07:48.604071 containerd[2027]: time="2025-02-13T16:07:48.603962869Z" level=info msg="StartContainer for \"b4bf89445658ed862786a7a9bf2ff7086078eecee38533cb064b4e703f1537ec\"" Feb 13 16:07:48.666762 systemd[1]: run-containerd-runc-k8s.io-b4bf89445658ed862786a7a9bf2ff7086078eecee38533cb064b4e703f1537ec-runc.7ZXgIA.mount: Deactivated successfully. Feb 13 16:07:48.680393 systemd[1]: Started cri-containerd-b4bf89445658ed862786a7a9bf2ff7086078eecee38533cb064b4e703f1537ec.scope - libcontainer container b4bf89445658ed862786a7a9bf2ff7086078eecee38533cb064b4e703f1537ec. Feb 13 16:07:48.750074 containerd[2027]: time="2025-02-13T16:07:48.749968129Z" level=info msg="StartContainer for \"b4bf89445658ed862786a7a9bf2ff7086078eecee38533cb064b4e703f1537ec\" returns successfully" Feb 13 16:07:49.308238 kubelet[3418]: E0213 16:07:49.307799 3418 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-154?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 16:07:52.897867 systemd[1]: cri-containerd-62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9.scope: Deactivated successfully. Feb 13 16:07:52.898381 systemd[1]: cri-containerd-62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9.scope: Consumed 2.559s CPU time, 15.6M memory peak, 0B memory swap peak. Feb 13 16:07:52.941311 containerd[2027]: time="2025-02-13T16:07:52.940414134Z" level=info msg="shim disconnected" id=62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9 namespace=k8s.io Feb 13 16:07:52.941311 containerd[2027]: time="2025-02-13T16:07:52.940492506Z" level=warning msg="cleaning up after shim disconnected" id=62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9 namespace=k8s.io Feb 13 16:07:52.941311 containerd[2027]: time="2025-02-13T16:07:52.940515510Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:07:52.948383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9-rootfs.mount: Deactivated successfully. 
Feb 13 16:07:52.968549 containerd[2027]: time="2025-02-13T16:07:52.968333622Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:07:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:07:53.598809 kubelet[3418]: I0213 16:07:53.598381 3418 scope.go:117] "RemoveContainer" containerID="62c35bf75b8533d5c2b5b7e08f1d5e445f9d1522d9f6c8cc2215788ff9be1be9" Feb 13 16:07:53.602116 containerd[2027]: time="2025-02-13T16:07:53.602044913Z" level=info msg="CreateContainer within sandbox \"60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 16:07:53.630493 containerd[2027]: time="2025-02-13T16:07:53.630353585Z" level=info msg="CreateContainer within sandbox \"60db5c9cf74c48a4bc1ab67e36ccf95cbd4f1889720255591be3a23120edd7f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"18b084dec7d2b335858417a5432d2adfcb5eb977fc6caacdaad84459b488e0b8\"" Feb 13 16:07:53.630934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684090764.mount: Deactivated successfully. Feb 13 16:07:53.631262 containerd[2027]: time="2025-02-13T16:07:53.631053725Z" level=info msg="StartContainer for \"18b084dec7d2b335858417a5432d2adfcb5eb977fc6caacdaad84459b488e0b8\"" Feb 13 16:07:53.695295 systemd[1]: Started cri-containerd-18b084dec7d2b335858417a5432d2adfcb5eb977fc6caacdaad84459b488e0b8.scope - libcontainer container 18b084dec7d2b335858417a5432d2adfcb5eb977fc6caacdaad84459b488e0b8. Feb 13 16:07:53.763597 containerd[2027]: time="2025-02-13T16:07:53.763400646Z" level=info msg="StartContainer for \"18b084dec7d2b335858417a5432d2adfcb5eb977fc6caacdaad84459b488e0b8\" returns successfully" Feb 13 16:07:59.308681 kubelet[3418]: E0213 16:07:59.308598 3418 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-154?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
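The closing kubelet errors show its periodic PUT to the node's Lease object in kube-node-lease timing out while the control-plane containers (kube-controller-manager, kube-scheduler) were being restarted; the ?timeout=10s in the URL is the client-side request timeout, not the lease duration. The node stays Ready as long as the lease is renewed before it lapses. A sketch of that staleness arithmetic, using the commonly cited defaults of a 40-second lease duration renewed roughly every 10 seconds (assumptions, not values read from this log):

```go
package main

import (
	"fmt"
	"time"
)

// leaseExpired reports whether a Lease whose spec.renewTime is renew and whose
// spec.leaseDurationSeconds is durSec has lapsed as of time now.
func leaseExpired(renew time.Time, durSec int, now time.Time) bool {
	return now.After(renew.Add(time.Duration(durSec) * time.Second))
}

func main() {
	// Hypothetical last successful renewal before the timeouts above began.
	renew := time.Date(2025, time.February, 13, 16, 7, 49, 0, time.UTC)
	fmt.Println(leaseExpired(renew, 40, renew.Add(5*time.Second)))  // false: one missed renewal is tolerated
	fmt.Println(leaseExpired(renew, 40, renew.Add(45*time.Second))) // true: lease lapsed after 40s
}
```

In this log the lease updates fail transiently during the restarts, so the kubelet's next successful PUT would renew the lease before the node is marked unhealthy.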