Feb 13 16:05:43.238251 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 16:05:43.238305 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025
Feb 13 16:05:43.238331 kernel: KASLR disabled due to lack of seed
Feb 13 16:05:43.238348 kernel: efi: EFI v2.7 by EDK II
Feb 13 16:05:43.238364 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 16:05:43.238380 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:05:43.238399 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 16:05:43.238416 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 16:05:43.238433 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 16:05:43.238449 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 16:05:43.238470 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 16:05:43.238486 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 16:05:43.238502 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 16:05:43.238518 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 16:05:43.238537 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 16:05:43.238560 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 16:05:43.238578 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 16:05:43.238595 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 16:05:43.238611 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 16:05:43.238628 kernel: printk: bootconsole [uart0] enabled
Feb 13 16:05:43.238644 kernel: NUMA: Failed to initialise from firmware
Feb 13 16:05:43.238661 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:05:43.238678 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 16:05:43.238694 kernel: Zone ranges:
Feb 13 16:05:43.238712 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 16:05:43.238728 kernel: DMA32 empty
Feb 13 16:05:43.238749 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 16:05:43.238766 kernel: Movable zone start for each node
Feb 13 16:05:43.238783 kernel: Early memory node ranges
Feb 13 16:05:43.238800 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 16:05:43.238818 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 16:05:43.238836 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 16:05:43.238853 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 16:05:43.238870 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 16:05:43.238887 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 16:05:43.238904 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 16:05:43.238922 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 16:05:43.238938 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:05:43.238960 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 16:05:43.238977 kernel: psci: probing for conduit method from ACPI.
Feb 13 16:05:43.239002 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 16:05:43.239020 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 16:05:43.239108 kernel: psci: Trusted OS migration not required
Feb 13 16:05:43.239142 kernel: psci: SMC Calling Convention v1.1
Feb 13 16:05:43.239163 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 16:05:43.239181 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 16:05:43.239200 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 16:05:43.239217 kernel: Detected PIPT I-cache on CPU0
Feb 13 16:05:43.239235 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 16:05:43.239270 kernel: CPU features: detected: Spectre-v2
Feb 13 16:05:43.239290 kernel: CPU features: detected: Spectre-v3a
Feb 13 16:05:43.239308 kernel: CPU features: detected: Spectre-BHB
Feb 13 16:05:43.239326 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 16:05:43.239344 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 16:05:43.239367 kernel: alternatives: applying boot alternatives
Feb 13 16:05:43.239388 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:05:43.239407 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:05:43.239425 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 16:05:43.239442 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:05:43.239460 kernel: Fallback order for Node 0: 0
Feb 13 16:05:43.239478 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 16:05:43.239495 kernel: Policy zone: Normal
Feb 13 16:05:43.239512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:05:43.239530 kernel: software IO TLB: area num 2.
Feb 13 16:05:43.239547 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 16:05:43.239572 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 16:05:43.239590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:05:43.239607 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:05:43.239626 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:05:43.239644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:05:43.239662 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:05:43.239680 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:05:43.239697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:05:43.239715 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:05:43.239733 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 16:05:43.239750 kernel: GICv3: 96 SPIs implemented
Feb 13 16:05:43.239773 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 16:05:43.239792 kernel: Root IRQ handler: gic_handle_irq
Feb 13 16:05:43.239809 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 16:05:43.239827 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 16:05:43.239844 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 16:05:43.239862 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 16:05:43.239880 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 16:05:43.239897 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 16:05:43.239914 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 16:05:43.239932 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 16:05:43.239950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:05:43.239967 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 16:05:43.239990 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 16:05:43.240008 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 16:05:43.240026 kernel: Console: colour dummy device 80x25
Feb 13 16:05:43.240067 kernel: printk: console [tty1] enabled
Feb 13 16:05:43.240089 kernel: ACPI: Core revision 20230628
Feb 13 16:05:43.240108 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 16:05:43.240126 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:05:43.240144 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:05:43.240162 kernel: landlock: Up and running.
Feb 13 16:05:43.240187 kernel: SELinux: Initializing.
Feb 13 16:05:43.240206 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:05:43.240223 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:05:43.240241 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:05:43.240260 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:05:43.240277 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:05:43.240295 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 16:05:43.240313 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 16:05:43.240332 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 16:05:43.240356 kernel: Remapping and enabling EFI services.
Feb 13 16:05:43.240374 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:05:43.240392 kernel: Detected PIPT I-cache on CPU1
Feb 13 16:05:43.240410 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 16:05:43.240429 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 16:05:43.240446 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 16:05:43.240465 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:05:43.240483 kernel: SMP: Total of 2 processors activated.
Feb 13 16:05:43.240501 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 16:05:43.240523 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 16:05:43.240541 kernel: CPU features: detected: CRC32 instructions
Feb 13 16:05:43.240559 kernel: CPU: All CPU(s) started at EL1
Feb 13 16:05:43.240592 kernel: alternatives: applying system-wide alternatives
Feb 13 16:05:43.240617 kernel: devtmpfs: initialized
Feb 13 16:05:43.240636 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:05:43.240655 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:05:43.240674 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:05:43.240693 kernel: SMBIOS 3.0.0 present.
Feb 13 16:05:43.240712 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 16:05:43.240737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:05:43.240756 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 16:05:43.240775 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 16:05:43.240794 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 16:05:43.240813 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:05:43.240832 kernel: audit: type=2000 audit(0.297:1): state=initialized audit_enabled=0 res=1
Feb 13 16:05:43.240850 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:05:43.240874 kernel: cpuidle: using governor menu
Feb 13 16:05:43.240893 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 16:05:43.240911 kernel: ASID allocator initialised with 65536 entries
Feb 13 16:05:43.240931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:05:43.240949 kernel: Serial: AMBA PL011 UART driver
Feb 13 16:05:43.240967 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 16:05:43.240986 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 16:05:43.241004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 16:05:43.241024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 16:05:43.242130 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 16:05:43.242163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 16:05:43.242183 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:05:43.242202 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:05:43.242221 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 16:05:43.242240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 16:05:43.242259 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:05:43.242278 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:05:43.242297 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:05:43.242325 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:05:43.242344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:05:43.242363 kernel: ACPI: Interpreter enabled
Feb 13 16:05:43.242382 kernel: ACPI: Using GIC for interrupt routing
Feb 13 16:05:43.242401 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 16:05:43.242420 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 16:05:43.242837 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 16:05:43.243108 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 16:05:43.249233 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 16:05:43.249563 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 16:05:43.249778 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 16:05:43.249823 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 16:05:43.249844 kernel: acpiphp: Slot [1] registered
Feb 13 16:05:43.249863 kernel: acpiphp: Slot [2] registered
Feb 13 16:05:43.249882 kernel: acpiphp: Slot [3] registered
Feb 13 16:05:43.249901 kernel: acpiphp: Slot [4] registered
Feb 13 16:05:43.249935 kernel: acpiphp: Slot [5] registered
Feb 13 16:05:43.249955 kernel: acpiphp: Slot [6] registered
Feb 13 16:05:43.249974 kernel: acpiphp: Slot [7] registered
Feb 13 16:05:43.249993 kernel: acpiphp: Slot [8] registered
Feb 13 16:05:43.250012 kernel: acpiphp: Slot [9] registered
Feb 13 16:05:43.250032 kernel: acpiphp: Slot [10] registered
Feb 13 16:05:43.252075 kernel: acpiphp: Slot [11] registered
Feb 13 16:05:43.252103 kernel: acpiphp: Slot [12] registered
Feb 13 16:05:43.252122 kernel: acpiphp: Slot [13] registered
Feb 13 16:05:43.252141 kernel: acpiphp: Slot [14] registered
Feb 13 16:05:43.252170 kernel: acpiphp: Slot [15] registered
Feb 13 16:05:43.252189 kernel: acpiphp: Slot [16] registered
Feb 13 16:05:43.252207 kernel: acpiphp: Slot [17] registered
Feb 13 16:05:43.252226 kernel: acpiphp: Slot [18] registered
Feb 13 16:05:43.252244 kernel: acpiphp: Slot [19] registered
Feb 13 16:05:43.252263 kernel: acpiphp: Slot [20] registered
Feb 13 16:05:43.252281 kernel: acpiphp: Slot [21] registered
Feb 13 16:05:43.252300 kernel: acpiphp: Slot [22] registered
Feb 13 16:05:43.252318 kernel: acpiphp: Slot [23] registered
Feb 13 16:05:43.252342 kernel: acpiphp: Slot [24] registered
Feb 13 16:05:43.252361 kernel: acpiphp: Slot [25] registered
Feb 13 16:05:43.252379 kernel: acpiphp: Slot [26] registered
Feb 13 16:05:43.252398 kernel: acpiphp: Slot [27] registered
Feb 13 16:05:43.252416 kernel: acpiphp: Slot [28] registered
Feb 13 16:05:43.252434 kernel: acpiphp: Slot [29] registered
Feb 13 16:05:43.252453 kernel: acpiphp: Slot [30] registered
Feb 13 16:05:43.252472 kernel: acpiphp: Slot [31] registered
Feb 13 16:05:43.252491 kernel: PCI host bridge to bus 0000:00
Feb 13 16:05:43.252749 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 16:05:43.252942 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 16:05:43.253158 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:05:43.253347 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 16:05:43.253618 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 16:05:43.253882 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 16:05:43.254133 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 16:05:43.254383 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 16:05:43.254593 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 16:05:43.254802 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:05:43.255023 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 16:05:43.257410 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 16:05:43.257638 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 16:05:43.257856 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 16:05:43.258884 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:05:43.259152 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 16:05:43.259511 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 16:05:43.259780 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 16:05:43.259993 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 16:05:43.260316 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 16:05:43.260523 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 16:05:43.260703 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 16:05:43.260890 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:05:43.260916 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 16:05:43.260936 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 16:05:43.260956 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 16:05:43.260975 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 16:05:43.260993 kernel: iommu: Default domain type: Translated
Feb 13 16:05:43.261012 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 16:05:43.261079 kernel: efivars: Registered efivars operations
Feb 13 16:05:43.261105 kernel: vgaarb: loaded
Feb 13 16:05:43.261125 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 16:05:43.261144 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:05:43.261162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:05:43.261182 kernel: pnp: PnP ACPI init
Feb 13 16:05:43.261405 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 16:05:43.261448 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 16:05:43.261474 kernel: NET: Registered PF_INET protocol family
Feb 13 16:05:43.261494 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 16:05:43.261513 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 16:05:43.261532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:05:43.261551 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:05:43.261570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 16:05:43.261589 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 16:05:43.261608 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:05:43.261627 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:05:43.261652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:05:43.261671 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:05:43.261690 kernel: kvm [1]: HYP mode not available
Feb 13 16:05:43.261709 kernel: Initialise system trusted keyrings
Feb 13 16:05:43.261729 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 16:05:43.261748 kernel: Key type asymmetric registered
Feb 13 16:05:43.261766 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:05:43.261786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 16:05:43.261804 kernel: io scheduler mq-deadline registered
Feb 13 16:05:43.261829 kernel: io scheduler kyber registered
Feb 13 16:05:43.261848 kernel: io scheduler bfq registered
Feb 13 16:05:43.262109 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 16:05:43.262139 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 16:05:43.262160 kernel: ACPI: button: Power Button [PWRB]
Feb 13 16:05:43.262180 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 16:05:43.262199 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 16:05:43.262218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:05:43.262250 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 16:05:43.262465 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 16:05:43.262492 kernel: printk: console [ttyS0] disabled
Feb 13 16:05:43.262512 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 16:05:43.262531 kernel: printk: console [ttyS0] enabled
Feb 13 16:05:43.262550 kernel: printk: bootconsole [uart0] disabled
Feb 13 16:05:43.262569 kernel: thunder_xcv, ver 1.0
Feb 13 16:05:43.262587 kernel: thunder_bgx, ver 1.0
Feb 13 16:05:43.262605 kernel: nicpf, ver 1.0
Feb 13 16:05:43.262630 kernel: nicvf, ver 1.0
Feb 13 16:05:43.262836 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 16:05:43.263026 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:05:42 UTC (1739462742)
Feb 13 16:05:43.265124 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 16:05:43.265155 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 16:05:43.265177 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 16:05:43.265198 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 16:05:43.267138 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:05:43.267173 kernel: Segment Routing with IPv6
Feb 13 16:05:43.267193 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:05:43.267211 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:05:43.267230 kernel: Key type dns_resolver registered
Feb 13 16:05:43.267267 kernel: registered taskstats version 1
Feb 13 16:05:43.267290 kernel: Loading compiled-in X.509 certificates
Feb 13 16:05:43.267309 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5'
Feb 13 16:05:43.267328 kernel: Key type .fscrypt registered
Feb 13 16:05:43.267346 kernel: Key type fscrypt-provisioning registered
Feb 13 16:05:43.267371 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:05:43.267390 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:05:43.267409 kernel: ima: No architecture policies found
Feb 13 16:05:43.267428 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 16:05:43.267447 kernel: clk: Disabling unused clocks
Feb 13 16:05:43.267465 kernel: Freeing unused kernel memory: 39360K
Feb 13 16:05:43.267484 kernel: Run /init as init process
Feb 13 16:05:43.267503 kernel: with arguments:
Feb 13 16:05:43.267521 kernel: /init
Feb 13 16:05:43.267539 kernel: with environment:
Feb 13 16:05:43.267563 kernel: HOME=/
Feb 13 16:05:43.267582 kernel: TERM=linux
Feb 13 16:05:43.267600 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:05:43.267624 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:05:43.267649 systemd[1]: Detected virtualization amazon.
Feb 13 16:05:43.267670 systemd[1]: Detected architecture arm64.
Feb 13 16:05:43.267690 systemd[1]: Running in initrd.
Feb 13 16:05:43.267715 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:05:43.267735 systemd[1]: Hostname set to .
Feb 13 16:05:43.267757 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:05:43.267777 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:05:43.267797 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:05:43.267818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:05:43.267840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:05:43.267861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:05:43.267886 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:05:43.267908 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:05:43.267932 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:05:43.267953 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:05:43.267974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:05:43.267994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:05:43.268015 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:05:43.268067 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:05:43.268094 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:05:43.268115 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:05:43.268136 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:05:43.268157 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:05:43.268177 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:05:43.268198 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:05:43.268219 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:05:43.268239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:05:43.268267 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:05:43.268287 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:05:43.268308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:05:43.268328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:05:43.268349 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:05:43.268369 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:05:43.268390 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:05:43.268410 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:05:43.268436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:43.268458 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:05:43.268479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:05:43.268499 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:05:43.268573 systemd-journald[250]: Collecting audit messages is disabled.
Feb 13 16:05:43.268626 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:05:43.268647 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:05:43.268667 kernel: Bridge firewalling registered
Feb 13 16:05:43.268692 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:05:43.268714 systemd-journald[250]: Journal started
Feb 13 16:05:43.268753 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2e593eb141d686ada3a20679dadf74) is 8.0M, max 75.3M, 67.3M free.
Feb 13 16:05:43.219904 systemd-modules-load[251]: Inserted module 'overlay'
Feb 13 16:05:43.259321 systemd-modules-load[251]: Inserted module 'br_netfilter'
Feb 13 16:05:43.281356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:05:43.288089 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:05:43.288964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:43.293532 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:05:43.307381 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:43.314329 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:05:43.330401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:05:43.336553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:05:43.364821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:05:43.374641 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:05:43.395186 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:43.413468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:05:43.420450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:05:43.441993 dracut-cmdline[286]: dracut-dracut-053
Feb 13 16:05:43.449288 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:05:43.510023 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 16:05:43.510085 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:05:43.510149 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:05:43.616082 kernel: SCSI subsystem initialized
Feb 13 16:05:43.623172 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 16:05:43.636370 kernel: iscsi: registered transport (tcp)
Feb 13 16:05:43.659742 kernel: iscsi: registered transport (qla4xxx)
Feb 13 16:05:43.659828 kernel: QLogic iSCSI HBA Driver
Feb 13 16:05:43.740093 kernel: random: crng init done
Feb 13 16:05:43.740702 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 16:05:43.745575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:05:43.748335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:05:43.778444 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:05:43.793954 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 16:05:43.829548 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 16:05:43.829656 kernel: device-mapper: uevent: version 1.0.3
Feb 13 16:05:43.829693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 16:05:43.909089 kernel: raid6: neonx8 gen() 6695 MB/s
Feb 13 16:05:43.926077 kernel: raid6: neonx4 gen() 6496 MB/s
Feb 13 16:05:43.943077 kernel: raid6: neonx2 gen() 5424 MB/s
Feb 13 16:05:43.960076 kernel: raid6: neonx1 gen() 3930 MB/s
Feb 13 16:05:43.977073 kernel: raid6: int64x8 gen() 3810 MB/s
Feb 13 16:05:43.994080 kernel: raid6: int64x4 gen() 3679 MB/s
Feb 13 16:05:44.011075 kernel: raid6: int64x2 gen() 3578 MB/s
Feb 13 16:05:44.028840 kernel: raid6: int64x1 gen() 2759 MB/s
Feb 13 16:05:44.028881 kernel: raid6: using algorithm neonx8 gen() 6695 MB/s
Feb 13 16:05:44.046812 kernel: raid6: .... xor() 4854 MB/s, rmw enabled
Feb 13 16:05:44.046861 kernel: raid6: using neon recovery algorithm
Feb 13 16:05:44.055407 kernel: xor: measuring software checksum speed
Feb 13 16:05:44.055476 kernel: 8regs : 11011 MB/sec
Feb 13 16:05:44.056505 kernel: 32regs : 11944 MB/sec
Feb 13 16:05:44.057711 kernel: arm64_neon : 9563 MB/sec
Feb 13 16:05:44.057745 kernel: xor: using function: 32regs (11944 MB/sec)
Feb 13 16:05:44.144126 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 16:05:44.165070 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:05:44.173394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:05:44.217211 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 16:05:44.226923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:05:44.238315 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 16:05:44.274589 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Feb 13 16:05:44.332861 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:05:44.350295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:05:44.476168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:05:44.488465 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 16:05:44.546932 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:05:44.553975 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:05:44.562690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:05:44.566055 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:05:44.589314 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 16:05:44.632530 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:05:44.741733 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 16:05:44.741819 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 16:05:44.770358 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 16:05:44.770639 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 16:05:44.770872 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 16:05:44.770902 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 16:05:44.771189 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:20:e0:e1:bc:63
Feb 13 16:05:44.771550 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 16:05:44.754025 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:05:44.754366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:44.757227 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:44.759683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:05:44.760121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:44.762567 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:44.791377 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 16:05:44.791421 kernel: GPT:9289727 != 16777215
Feb 13 16:05:44.784754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:44.794780 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 16:05:44.798753 kernel: GPT:9289727 != 16777215
Feb 13 16:05:44.798826 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 16:05:44.800731 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:44.804187 (udev-worker)[517]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:05:44.832565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:44.842469 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:44.897490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:44.926159 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (541)
Feb 13 16:05:44.936093 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (514)
Feb 13 16:05:45.011656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 16:05:45.032679 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 16:05:45.077737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 16:05:45.091368 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 16:05:45.093956 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 16:05:45.120745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 16:05:45.136112 disk-uuid[660]: Primary Header is updated.
Feb 13 16:05:45.136112 disk-uuid[660]: Secondary Entries is updated.
Feb 13 16:05:45.136112 disk-uuid[660]: Secondary Header is updated.
Feb 13 16:05:45.147087 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:45.158083 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:46.170248 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:46.172482 disk-uuid[661]: The operation has completed successfully.
Feb 13 16:05:46.370788 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 16:05:46.375115 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 16:05:46.427318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 16:05:46.435625 sh[919]: Success
Feb 13 16:05:46.454499 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 16:05:46.542225 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 16:05:46.560308 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 16:05:46.567357 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 16:05:46.610548 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19
Feb 13 16:05:46.610627 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:46.610655 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 16:05:46.612339 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 16:05:46.613622 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 16:05:46.745089 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 16:05:46.767417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 16:05:46.771492 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 16:05:46.784348 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 16:05:46.791538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 16:05:46.822094 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:46.822189 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:46.823555 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:46.831090 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:46.852021 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 16:05:46.857084 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:46.869752 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 16:05:46.882439 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 16:05:46.992335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:05:47.005346 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:05:47.058752 systemd-networkd[1111]: lo: Link UP
Feb 13 16:05:47.058775 systemd-networkd[1111]: lo: Gained carrier
Feb 13 16:05:47.063477 systemd-networkd[1111]: Enumeration completed
Feb 13 16:05:47.064222 systemd-networkd[1111]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:47.064228 systemd-networkd[1111]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:05:47.065809 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:05:47.078001 systemd-networkd[1111]: eth0: Link UP
Feb 13 16:05:47.078016 systemd-networkd[1111]: eth0: Gained carrier
Feb 13 16:05:47.078032 systemd[1]: Reached target network.target - Network.
Feb 13 16:05:47.078034 systemd-networkd[1111]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:47.102129 systemd-networkd[1111]: eth0: DHCPv4 address 172.31.18.147/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 16:05:47.284250 ignition[1024]: Ignition 2.19.0
Feb 13 16:05:47.284281 ignition[1024]: Stage: fetch-offline
Feb 13 16:05:47.285936 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:47.285983 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:47.289319 ignition[1024]: Ignition finished successfully
Feb 13 16:05:47.295182 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:05:47.307446 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 16:05:47.340754 ignition[1120]: Ignition 2.19.0
Feb 13 16:05:47.340785 ignition[1120]: Stage: fetch
Feb 13 16:05:47.341888 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:47.341914 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:47.346265 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:47.368251 ignition[1120]: PUT result: OK
Feb 13 16:05:47.371193 ignition[1120]: parsed url from cmdline: ""
Feb 13 16:05:47.371209 ignition[1120]: no config URL provided
Feb 13 16:05:47.371223 ignition[1120]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:05:47.371265 ignition[1120]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:05:47.371297 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:47.375433 ignition[1120]: PUT result: OK
Feb 13 16:05:47.375507 ignition[1120]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 16:05:47.382436 ignition[1120]: GET result: OK
Feb 13 16:05:47.382872 ignition[1120]: parsing config with SHA512: cd9073323fc0d4fa047392183e3fc003358c64188ed29118eef0368207acba7afb6c1d6cf99c9a8b40d1749eceadf911d22ded09a4d28f032c5e1beefbb22e16
Feb 13 16:05:47.390570 unknown[1120]: fetched base config from "system"
Feb 13 16:05:47.391109 unknown[1120]: fetched base config from "system"
Feb 13 16:05:47.391775 ignition[1120]: fetch: fetch complete
Feb 13 16:05:47.391124 unknown[1120]: fetched user config from "aws"
Feb 13 16:05:47.391787 ignition[1120]: fetch: fetch passed
Feb 13 16:05:47.391872 ignition[1120]: Ignition finished successfully
Feb 13 16:05:47.403121 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 16:05:47.416373 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 16:05:47.451380 ignition[1126]: Ignition 2.19.0
Feb 13 16:05:47.451410 ignition[1126]: Stage: kargs
Feb 13 16:05:47.452586 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:47.452615 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:47.452789 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:47.454906 ignition[1126]: PUT result: OK
Feb 13 16:05:47.465594 ignition[1126]: kargs: kargs passed
Feb 13 16:05:47.465956 ignition[1126]: Ignition finished successfully
Feb 13 16:05:47.471456 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 16:05:47.482731 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 16:05:47.516214 ignition[1132]: Ignition 2.19.0
Feb 13 16:05:47.516244 ignition[1132]: Stage: disks
Feb 13 16:05:47.517913 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:47.517942 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:47.518980 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:47.521574 ignition[1132]: PUT result: OK
Feb 13 16:05:47.529900 ignition[1132]: disks: disks passed
Feb 13 16:05:47.530036 ignition[1132]: Ignition finished successfully
Feb 13 16:05:47.535124 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 16:05:47.537917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 16:05:47.540376 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:05:47.544579 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:05:47.546591 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:05:47.548557 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:05:47.568781 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 16:05:47.622738 systemd-fsck[1140]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 16:05:47.632829 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 16:05:47.645453 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 16:05:47.748105 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none.
Feb 13 16:05:47.749993 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 16:05:47.753183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:05:47.775225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:05:47.781276 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 16:05:47.783683 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 16:05:47.783817 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 16:05:47.783868 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:05:47.806231 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1159)
Feb 13 16:05:47.810536 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:47.810600 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:47.812178 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:47.812711 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 16:05:47.825347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 16:05:47.832100 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:47.835216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:05:48.316526 initrd-setup-root[1183]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 16:05:48.336824 initrd-setup-root[1190]: cut: /sysroot/etc/group: No such file or directory
Feb 13 16:05:48.346083 initrd-setup-root[1197]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 16:05:48.356503 initrd-setup-root[1204]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 16:05:48.640404 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 16:05:48.655274 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 16:05:48.661339 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 16:05:48.679960 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 16:05:48.681993 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:48.728748 ignition[1272]: INFO : Ignition 2.19.0
Feb 13 16:05:48.728748 ignition[1272]: INFO : Stage: mount
Feb 13 16:05:48.728748 ignition[1272]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:48.728748 ignition[1272]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:48.730567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:05:48.736699 ignition[1272]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:48.736699 ignition[1272]: INFO : PUT result: OK
Feb 13 16:05:48.749721 ignition[1272]: INFO : mount: mount passed
Feb 13 16:05:48.749721 ignition[1272]: INFO : Ignition finished successfully
Feb 13 16:05:48.753781 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:05:48.774576 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:05:48.801293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:05:48.823071 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1283)
Feb 13 16:05:48.827469 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:48.827535 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:48.828770 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:48.834127 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:48.838199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:05:48.883462 ignition[1300]: INFO : Ignition 2.19.0
Feb 13 16:05:48.886833 ignition[1300]: INFO : Stage: files
Feb 13 16:05:48.886833 ignition[1300]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:48.886833 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:48.886833 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:48.896590 ignition[1300]: INFO : PUT result: OK
Feb 13 16:05:48.900446 ignition[1300]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 16:05:48.904077 ignition[1300]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 16:05:48.904077 ignition[1300]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 16:05:48.927315 ignition[1300]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 16:05:48.930357 ignition[1300]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 16:05:48.933254 ignition[1300]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 16:05:48.931509 unknown[1300]: wrote ssh authorized keys file for user: core
Feb 13 16:05:48.938486 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:05:48.938486 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 16:05:49.032790 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 16:05:49.137234 systemd-networkd[1111]: eth0: Gained IPv6LL
Feb 13 16:05:49.195141 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:05:49.195141 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 16:05:49.202687 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 16:05:49.202687 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 16:05:49.209817 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 16:05:49.694977 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 16:05:50.138091 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 16:05:50.138091 ignition[1300]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:05:50.145341 ignition[1300]: INFO : files: files passed
Feb 13 16:05:50.145341 ignition[1300]: INFO : Ignition finished successfully
Feb 13 16:05:50.172090 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 16:05:50.183392 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 16:05:50.195559 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 16:05:50.207526 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 16:05:50.207773 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 16:05:50.225559 initrd-setup-root-after-ignition[1328]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:05:50.225559 initrd-setup-root-after-ignition[1328]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:05:50.233835 initrd-setup-root-after-ignition[1332]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:05:50.240015 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:05:50.244278 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 16:05:50.260302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 16:05:50.320612 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:05:50.320806 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:05:50.324575 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:05:50.332412 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:05:50.334518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:05:50.350340 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:05:50.385307 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:05:50.395458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:05:50.427428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:50.432410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:50.436068 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:05:50.440652 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:05:50.440938 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:05:50.444581 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:05:50.452876 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:05:50.456582 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:05:50.458990 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:05:50.461932 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:05:50.469935 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:05:50.472692 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:05:50.480074 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:05:50.483559 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:05:50.486130 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:05:50.487901 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:05:50.488186 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:05:50.491113 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:50.502716 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:50.505380 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:05:50.509606 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:50.512261 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:05:50.512510 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:05:50.515338 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:05:50.515620 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:05:50.519399 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:05:50.519738 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:05:50.541275 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:05:50.561270 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Feb 13 16:05:50.563120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:05:50.563819 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:50.575442 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:05:50.575711 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:05:50.590724 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:05:50.592647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:05:50.615116 ignition[1352]: INFO : Ignition 2.19.0 Feb 13 16:05:50.620202 ignition[1352]: INFO : Stage: umount Feb 13 16:05:50.620202 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:50.620202 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:50.620202 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:50.626393 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:05:50.644357 ignition[1352]: INFO : PUT result: OK Feb 13 16:05:50.650840 ignition[1352]: INFO : umount: umount passed Feb 13 16:05:50.650840 ignition[1352]: INFO : Ignition finished successfully Feb 13 16:05:50.652703 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:05:50.652909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:05:50.661735 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:05:50.663590 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:05:50.665887 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:05:50.665976 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:05:50.668006 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:05:50.668946 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:05:50.678519 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:05:50.678646 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:05:50.680553 systemd[1]: Stopped target network.target - Network. Feb 13 16:05:50.682132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:05:50.682243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:05:50.684691 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:05:50.687993 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:05:50.688176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:50.690503 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:05:50.692215 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:05:50.694116 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:05:50.694199 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:05:50.696166 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:05:50.696248 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:05:50.698162 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:05:50.698268 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:05:50.700230 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:05:50.700314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
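Every entry in this transcript has the same shape: a microsecond timestamp, a source with a PID (`systemd[1]`, `ignition[1352]`, `systemd-networkd[1111]`, ...), and a message. A small parser for that shape can be handy when grepping state transitions such as `Deactivated successfully` out of a dump like this one; the regex below is an assumption fitted to the lines shown here, not a journald API (kernel lines, which carry no `[pid]`, are deliberately skipped):

```python
import re
from typing import Iterator, NamedTuple

# Matches e.g. "Feb 13 16:05:50.563120 systemd[1]: systemd-udev-trigger.service: ..."
LINE = re.compile(
    r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<source>[\w().-]+)\[(?P<pid>\d+)\]:\s+"
    r"(?P<msg>.*)"
)

class Entry(NamedTuple):
    ts: str
    source: str
    pid: int
    msg: str

def parse(lines) -> Iterator[Entry]:
    for line in lines:
        m = LINE.match(line.strip())
        if m:  # kernel lines and other non-matching text are skipped
            yield Entry(m["ts"], m["source"], int(m["pid"]), m["msg"])

sample = [
    "Feb 13 16:05:50.563120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.",
]
print(sum("Deactivated successfully" in e.msg for e in parse(sample)))  # 1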
Feb 13 16:05:50.702371 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:05:50.702474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:05:50.705384 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:05:50.718462 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:05:50.725166 systemd-networkd[1111]: eth0: DHCPv6 lease lost Feb 13 16:05:50.730769 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:05:50.731012 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:05:50.741265 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:05:50.741559 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:05:50.747970 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:05:50.748156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:05:50.757675 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:05:50.792530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:05:50.792705 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:05:50.795594 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:05:50.795722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:50.798374 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:05:50.798511 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:50.801488 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:05:50.801603 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:50.804394 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:05:50.854560 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:05:50.855479 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:50.864106 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:05:50.864271 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:50.870637 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:05:50.870733 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:50.872872 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:05:50.872973 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:05:50.875282 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:05:50.875384 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:05:50.877776 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:05:50.877876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:05:50.896248 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:05:50.903367 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:05:50.903502 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:05:50.903683 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Feb 13 16:05:50.903775 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:50.905024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:05:50.905141 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:50.908253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:05:50.908363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:50.909664 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:05:50.912157 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:05:50.920124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:05:50.920346 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:05:50.920941 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:05:50.924334 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:05:50.947536 systemd[1]: Switching root. Feb 13 16:05:51.018344 systemd-journald[250]: Journal stopped Feb 13 16:05:53.680879 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Feb 13 16:05:53.681020 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:05:53.681088 kernel: SELinux: policy capability open_perms=1 Feb 13 16:05:53.681122 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:05:53.681151 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:05:53.681188 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:05:53.681221 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:05:53.681252 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:05:53.681282 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:05:53.681314 kernel: audit: type=1403 audit(1739462751.649:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:05:53.681354 systemd[1]: Successfully loaded SELinux policy in 61.299ms. Feb 13 16:05:53.681403 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.300ms. Feb 13 16:05:53.681436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:05:53.681473 systemd[1]: Detected virtualization amazon. Feb 13 16:05:53.681509 systemd[1]: Detected architecture arm64. Feb 13 16:05:53.681542 systemd[1]: Detected first boot. Feb 13 16:05:53.681575 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:05:53.681609 zram_generator::config[1394]: No configuration found. Feb 13 16:05:53.681646 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:05:53.681679 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 16:05:53.681713 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 16:05:53.681747 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 16:05:53.681784 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 16:05:53.683798 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Feb 13 16:05:53.683863 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:05:53.683895 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:05:53.683928 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:05:53.683964 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:05:53.683995 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:05:53.684027 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:05:53.687308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:53.687356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:53.687388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:05:53.687422 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:05:53.687457 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:05:53.687489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:05:53.687522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:05:53.687556 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:53.687588 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 16:05:53.687624 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 16:05:53.687655 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 16:05:53.687699 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:05:53.687731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:53.687766 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:05:53.687797 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:05:53.687831 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:05:53.687861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:05:53.687896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:05:53.687926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:05:53.687961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:53.687993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:53.688024 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:05:53.688080 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:05:53.688116 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:05:53.688149 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:05:53.688181 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:05:53.688219 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:05:53.688249 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
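Unit names above such as `system-serial\x2dgetty.slice` and `dev-disk-by\x2dlabel-OEM.device` use systemd's C-style escaping: because `-` separates path components in slice and device unit names, a literal `-` inside a component is encoded as `\x2d`. A small helper illustrating the decode direction, as a sketch rather than a reimplementation of `systemd-escape`:

```python
import re

def unit_unescape(name: str) -> str:
    # systemd escapes bytes as \xNN inside unit name components;
    # e.g. "serial\x2dgetty" decodes back to "serial-getty".
    return re.sub(
        r"\\x([0-9a-fA-F]{2})",
        lambda m: chr(int(m.group(1), 16)),
        name,
    )

print(unit_unescape(r"system-serial\x2dgetty.slice"))    # system-serial-getty.slice
print(unit_unescape(r"dev-disk-by\x2dlabel-OEM.device")) # dev-disk-by-label-OEM.device
```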
Feb 13 16:05:53.688280 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:05:53.688310 systemd[1]: Reached target machines.target - Containers. Feb 13 16:05:53.688339 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:05:53.688369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:53.688399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:05:53.688429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:05:53.688461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:53.688496 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:05:53.688526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:53.688556 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:05:53.688590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:53.688621 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:05:53.688653 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 16:05:53.688706 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 16:05:53.688739 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 16:05:53.688774 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 16:05:53.688806 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:05:53.688836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:05:53.688866 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:05:53.688897 kernel: fuse: init (API version 7.39) Feb 13 16:05:53.688926 kernel: loop: module loaded Feb 13 16:05:53.688955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:05:53.688986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:05:53.689018 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 16:05:53.692697 systemd[1]: Stopped verity-setup.service. Feb 13 16:05:53.692752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:05:53.692785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:05:53.692817 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:05:53.692849 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:05:53.694218 systemd-journald[1477]: Collecting audit messages is disabled. Feb 13 16:05:53.694323 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:05:53.694358 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:05:53.694390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:53.694420 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:05:53.694455 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Feb 13 16:05:53.694488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:53.694518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:53.694553 kernel: ACPI: bus type drm_connector registered Feb 13 16:05:53.694587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:53.694617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:53.694649 systemd-journald[1477]: Journal started Feb 13 16:05:53.694699 systemd-journald[1477]: Runtime Journal (/run/log/journal/ec2e593eb141d686ada3a20679dadf74) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:05:53.067605 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:05:53.134704 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 16:05:53.135778 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 16:05:53.704661 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:05:53.704738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:05:53.704777 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:05:53.711547 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:05:53.712061 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:05:53.715192 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:53.716939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:53.721207 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:53.724813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:05:53.729919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:05:53.740198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:05:53.771922 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:05:53.783315 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:05:53.794882 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:05:53.798227 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:05:53.798296 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:05:53.803503 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:05:53.810443 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:05:53.821540 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:05:53.824401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:53.836393 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:05:53.841410 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:05:53.843685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:05:53.855505 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 16:05:53.858343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:05:53.870376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:05:53.882976 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:05:53.905364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:05:53.911467 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:05:53.914463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:05:53.917570 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:05:53.962795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:05:53.965426 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:05:53.984099 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 16:05:53.986806 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:05:53.992419 systemd-journald[1477]: Time spent on flushing to /var/log/journal/ec2e593eb141d686ada3a20679dadf74 is 115.195ms for 911 entries. Feb 13 16:05:53.992419 systemd-journald[1477]: System Journal (/var/log/journal/ec2e593eb141d686ada3a20679dadf74) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:05:54.143265 systemd-journald[1477]: Received client request to flush runtime journal. Feb 13 16:05:54.143470 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:05:54.063421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:54.077672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:54.097563 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:05:54.109546 systemd-tmpfiles[1525]: ACLs are not supported, ignoring. Feb 13 16:05:54.109571 systemd-tmpfiles[1525]: ACLs are not supported, ignoring. Feb 13 16:05:54.136253 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:54.150628 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:05:54.155838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:05:54.163601 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:05:54.166239 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:05:54.188147 udevadm[1537]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 16:05:54.194504 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 16:05:54.275298 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:05:54.286497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:05:54.331584 systemd-tmpfiles[1547]: ACLs are not supported, ignoring. Feb 13 16:05:54.331625 systemd-tmpfiles[1547]: ACLs are not supported, ignoring. Feb 13 16:05:54.341479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
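The journald flush report above (115.195 ms spent flushing 911 entries from the 8.0M runtime journal to the persistent system journal) works out to roughly an eighth of a millisecond per entry; a quick check:

```python
flush_ms, entries = 115.195, 911
print(f"{flush_ms / entries:.3f} ms/entry")  # ~0.126 ms/entry
```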
Feb 13 16:05:54.367109 kernel: loop2: detected capacity change from 0 to 52536 Feb 13 16:05:54.507107 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 16:05:54.613086 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 16:05:54.634080 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 16:05:54.666087 kernel: loop6: detected capacity change from 0 to 52536 Feb 13 16:05:54.677079 kernel: loop7: detected capacity change from 0 to 114328 Feb 13 16:05:54.691814 (sd-merge)[1553]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 16:05:54.693489 (sd-merge)[1553]: Merged extensions into '/usr'. Feb 13 16:05:54.704850 systemd[1]: Reloading requested from client PID 1524 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:05:54.704883 systemd[1]: Reloading... Feb 13 16:05:54.861614 zram_generator::config[1575]: No configuration found. Feb 13 16:05:55.256151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:55.386993 systemd[1]: Reloading finished in 681 ms. Feb 13 16:05:55.432992 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:05:55.437967 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:05:55.455367 systemd[1]: Starting ensure-sysext.service... Feb 13 16:05:55.468588 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:05:55.475343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:05:55.499269 systemd[1]: Reloading requested from client PID 1631 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:05:55.499312 systemd[1]: Reloading... Feb 13 16:05:55.556847 systemd-tmpfiles[1632]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:05:55.560145 systemd-tmpfiles[1632]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:05:55.562294 systemd-tmpfiles[1632]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:05:55.562976 systemd-tmpfiles[1632]: ACLs are not supported, ignoring. Feb 13 16:05:55.563328 systemd-tmpfiles[1632]: ACLs are not supported, ignoring. Feb 13 16:05:55.577832 systemd-tmpfiles[1632]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:05:55.579131 systemd-tmpfiles[1632]: Skipping /boot Feb 13 16:05:55.626698 systemd-udevd[1633]: Using default interface naming scheme 'v255'. Feb 13 16:05:55.641074 ldconfig[1519]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:05:55.649866 systemd-tmpfiles[1632]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:05:55.651290 systemd-tmpfiles[1632]: Skipping /boot Feb 13 16:05:55.661249 zram_generator::config[1657]: No configuration found. Feb 13 16:05:55.864656 (udev-worker)[1687]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:05:56.067641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
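The `(sd-merge)` lines above show systemd-sysext overlaying the extension images (`containerd-flatcar`, `docker-flatcar`, `kubernetes`, `oem-ami`) onto `/usr`, which is why the subsequent reload picks up new unit files. Per the systemd-sysext documentation, each merged image must carry an extension-release file at `/usr/lib/extension-release.d/extension-release.<name>`. A rough illustrative self-check of that convention (this helper is an assumption built on the documented layout, not Flatcar's own tooling):

```python
from pathlib import Path

def extension_release(name: str, root: Path = Path("/usr")) -> dict:
    """Read /usr/lib/extension-release.d/extension-release.<name>
    into a dict of KEY=VALUE pairs (empty if the file is absent)."""
    path = root / "lib/extension-release.d" / f"extension-release.{name}"
    if not path.is_file():
        return {}
    pairs = {}
    for line in path.read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            pairs[key.strip()] = value.strip().strip('"')
    return pairs

# The extensions named in the (sd-merge) lines above.
for ext in ("containerd-flatcar", "docker-flatcar", "kubernetes", "oem-ami"):
    print(ext, extension_release(ext).get("ID", "<missing>"))
```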
Feb 13 16:05:56.209097 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1687) Feb 13 16:05:56.232676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:05:56.232981 systemd[1]: Reloading finished in 732 ms. Feb 13 16:05:56.276373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:56.282121 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:05:56.285070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:56.401784 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:05:56.417213 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:05:56.419769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:56.425315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:56.433583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:56.439622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:56.441859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:56.448628 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:05:56.458604 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:05:56.469570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:05:56.477519 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:05:56.501796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:05:56.506194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:56.506830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:56.511452 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:56.514162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:56.545500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:56.552700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:05:56.562010 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:56.569611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:56.572720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:56.573448 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:05:56.590968 systemd[1]: Finished ensure-sysext.service. Feb 13 16:05:56.603368 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:05:56.606914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:56.607605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:56.647498 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Feb 13 16:05:56.679204 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:05:56.679585 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:05:56.710489 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:56.710796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:56.713692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:56.714831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:56.733258 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:05:56.751540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:05:56.751775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:05:56.760535 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:05:56.812371 augenrules[1868]: No rules Feb 13 16:05:56.813825 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:05:56.828418 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:05:56.836361 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:05:56.839023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:05:56.856965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:05:56.867536 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:05:56.872294 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:05:56.889281 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:05:56.898534 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:05:56.909138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:56.956100 lvm[1880]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:05:56.962152 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:05:57.006161 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:05:57.009361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:57.028348 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:05:57.065128 lvm[1891]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:05:57.077415 systemd-networkd[1832]: lo: Link UP Feb 13 16:05:57.077431 systemd-networkd[1832]: lo: Gained carrier Feb 13 16:05:57.080730 systemd-networkd[1832]: Enumeration completed Feb 13 16:05:57.080931 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:05:57.083446 systemd-networkd[1832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 16:05:57.083454 systemd-networkd[1832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:05:57.086283 systemd-networkd[1832]: eth0: Link UP Feb 13 16:05:57.086800 systemd-networkd[1832]: eth0: Gained carrier Feb 13 16:05:57.087145 systemd-networkd[1832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:05:57.096359 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:05:57.099178 systemd-networkd[1832]: eth0: DHCPv4 address 172.31.18.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:05:57.135867 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:05:57.150346 systemd-resolved[1834]: Positive Trust Anchors: Feb 13 16:05:57.150864 systemd-resolved[1834]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:05:57.150932 systemd-resolved[1834]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:05:57.160681 systemd-resolved[1834]: Defaulting to hostname 'linux'. Feb 13 16:05:57.164193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:05:57.166783 systemd[1]: Reached target network.target - Network. Feb 13 16:05:57.168716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:57.170991 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:05:57.173302 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:05:57.175669 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:05:57.178324 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:05:57.180573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:05:57.182898 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:05:57.185210 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:05:57.185279 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:05:57.186954 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:05:57.190501 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:05:57.195352 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:05:57.206438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:05:57.209648 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:05:57.212019 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:05:57.214200 systemd[1]: Reached target basic.target - Basic System. 
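The DHCPv4 lease logged above (`172.31.18.147/20`, gateway `172.31.16.1`, acquired from `172.31.16.1`) is internally consistent: a /20 mask places the address in `172.31.16.0/20`, whose first usable host is the gateway. Verifying with the standard library:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.18.147/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                           # 172.31.16.0/20
print(gateway in iface.network)                # True
print(next(iface.network.hosts()) == gateway)  # True: gateway is the first host
```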
Feb 13 16:05:57.216020 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:05:57.216092 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:05:57.219320 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:05:57.226420 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:05:57.233497 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:05:57.248410 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:05:57.255152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:05:57.257178 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:05:57.268573 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:05:57.281463 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 16:05:57.301435 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:05:57.314104 jq[1900]: false Feb 13 16:05:57.315342 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 16:05:57.361008 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:05:57.367910 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:05:57.384349 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:05:57.389246 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:05:57.390155 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:05:57.393421 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:05:57.401338 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:05:57.431722 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:05:57.432140 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:05:57.436204 dbus-daemon[1899]: [system] SELinux support is enabled Feb 13 16:05:57.438554 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:05:57.453186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:05:57.453603 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:05:57.472754 coreos-metadata[1898]: Feb 13 16:05:57.472 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:05:57.476708 coreos-metadata[1898]: Feb 13 16:05:57.476 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 16:05:57.477621 coreos-metadata[1898]: Feb 13 16:05:57.477 INFO Fetch successful Feb 13 16:05:57.479249 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:05:57.479363 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 16:05:57.482168 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:05:57.487328 coreos-metadata[1898]: Feb 13 16:05:57.479 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 16:05:57.487328 coreos-metadata[1898]: Feb 13 16:05:57.484 INFO Fetch successful Feb 13 16:05:57.487328 coreos-metadata[1898]: Feb 13 16:05:57.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 16:05:57.482220 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:05:57.492326 coreos-metadata[1898]: Feb 13 16:05:57.490 INFO Fetch successful Feb 13 16:05:57.492326 coreos-metadata[1898]: Feb 13 16:05:57.490 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 16:05:57.493383 coreos-metadata[1898]: Feb 13 16:05:57.493 INFO Fetch successful Feb 13 16:05:57.493383 coreos-metadata[1898]: Feb 13 16:05:57.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 16:05:57.501074 coreos-metadata[1898]: Feb 13 16:05:57.494 INFO Fetch failed with 404: resource not found Feb 13 16:05:57.501074 coreos-metadata[1898]: Feb 13 16:05:57.494 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 16:05:57.501074 coreos-metadata[1898]: Feb 13 16:05:57.497 INFO Fetch successful Feb 13 16:05:57.501074 coreos-metadata[1898]: Feb 13 16:05:57.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 16:05:57.503148 coreos-metadata[1898]: Feb 13 16:05:57.502 INFO Fetch successful Feb 13 16:05:57.503148 coreos-metadata[1898]: Feb 13 16:05:57.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 16:05:57.507533 coreos-metadata[1898]: Feb 13 16:05:57.507 INFO Fetch successful Feb 13 16:05:57.507533 coreos-metadata[1898]: Feb 13 16:05:57.507 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 16:05:57.511357 coreos-metadata[1898]: Feb 13 16:05:57.510 INFO Fetch successful Feb 13 16:05:57.511357 coreos-metadata[1898]: Feb 13 16:05:57.510 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 16:05:57.513380 coreos-metadata[1898]: Feb 13 16:05:57.512 INFO Fetch successful Feb 13 16:05:57.512745 dbus-daemon[1899]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1832 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 16:05:57.516272 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:05:57.516802 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 16:05:57.532968 extend-filesystems[1901]: Found loop4 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found loop5 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found loop6 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found loop7 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p1 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p2 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p3 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found usr Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p4 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p6 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p7 Feb 13 16:05:57.532968 extend-filesystems[1901]: Found nvme0n1p9 Feb 13 16:05:57.532968 extend-filesystems[1901]: Checking size of /dev/nvme0n1p9 Feb 13 16:05:57.549449 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 16:05:57.578528 update_engine[1914]: I20250213 16:05:57.570367 1914 main.cc:92] Flatcar Update Engine starting Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: ---------------------------------------------------- Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: corporation. Support and training for ntp-4 are Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: available at https://www.nwtime.org/support Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: ---------------------------------------------------- Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: proto: precision = 0.096 usec (-23) Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: basedate set to 2025-02-01 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: gps base set to 2025-02-02 (week 2352) Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Listen normally on 3 eth0 172.31.18.147:123 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Listen normally on 4 lo [::1]:123 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: bind(21) AF_INET6 fe80::420:e0ff:fee1:bc63%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: unable to create socket on eth0 (5) for fe80::420:e0ff:fee1:bc63%2#123 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: failed to init interface for address fe80::420:e0ff:fee1:bc63%2 Feb 13 16:05:57.595490 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: Listening on routing socket on fd #21 for interface updates Feb 13 16:05:57.564438 ntpd[1903]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:05:57.609011 update_engine[1914]: I20250213 16:05:57.588517 1914 
update_check_scheduler.cc:74] Next update check in 5m28s Feb 13 16:05:57.595155 (ntainerd)[1932]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:05:57.609700 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:57.609700 ntpd[1903]: 13 Feb 16:05:57 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:57.564503 ntpd[1903]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:05:57.596300 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:05:57.564526 ntpd[1903]: ---------------------------------------------------- Feb 13 16:05:57.603402 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 16:05:57.564545 ntpd[1903]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:05:57.564564 ntpd[1903]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:05:57.564581 ntpd[1903]: corporation. Support and training for ntp-4 are Feb 13 16:05:57.564600 ntpd[1903]: available at https://www.nwtime.org/support Feb 13 16:05:57.564619 ntpd[1903]: ---------------------------------------------------- Feb 13 16:05:57.571793 ntpd[1903]: proto: precision = 0.096 usec (-23) Feb 13 16:05:57.574595 ntpd[1903]: basedate set to 2025-02-01 Feb 13 16:05:57.620938 jq[1915]: true Feb 13 16:05:57.574655 ntpd[1903]: gps base set to 2025-02-02 (week 2352) Feb 13 16:05:57.580535 ntpd[1903]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:05:57.580614 ntpd[1903]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:05:57.581444 ntpd[1903]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:05:57.581511 ntpd[1903]: Listen normally on 3 eth0 172.31.18.147:123 Feb 13 16:05:57.581579 ntpd[1903]: Listen normally on 4 lo [::1]:123 Feb 13 16:05:57.581655 ntpd[1903]: bind(21) AF_INET6 fe80::420:e0ff:fee1:bc63%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:05:57.581892 ntpd[1903]: unable to create socket on eth0 (5) for fe80::420:e0ff:fee1:bc63%2#123 Feb 13 16:05:57.581928 ntpd[1903]: failed to init interface for address fe80::420:e0ff:fee1:bc63%2 Feb 13 16:05:57.581989 ntpd[1903]: Listening on routing socket on fd #21 for interface updates Feb 13 16:05:57.608092 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:57.608139 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:57.656442 extend-filesystems[1901]: Resized partition /dev/nvme0n1p9 Feb 13 16:05:57.664598 extend-filesystems[1952]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:05:57.685645 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 16:05:57.685760 tar[1920]: linux-arm64/helm Feb 13 16:05:57.713144 jq[1945]: true Feb 13 16:05:57.746416 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:05:57.749634 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:05:57.818958 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 16:05:57.825096 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 16:05:57.846168 extend-filesystems[1952]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 16:05:57.846168 extend-filesystems[1952]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 16:05:57.846168 extend-filesystems[1952]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 16:05:57.872305 extend-filesystems[1901]: Resized filesystem in /dev/nvme0n1p9 Feb 13 16:05:57.858575 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:05:57.861132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:05:57.922386 systemd-logind[1913]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 16:05:57.923125 systemd-logind[1913]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 16:05:57.925159 systemd-logind[1913]: New seat seat0. Feb 13 16:05:57.927171 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:05:58.003401 bash[1987]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:05:58.054376 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1713) Feb 13 16:05:58.062542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:05:58.072274 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:05:58.082192 locksmithd[1943]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:05:58.121819 systemd[1]: Starting sshkeys.service... Feb 13 16:05:58.196172 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:05:58.249611 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 16:05:58.254665 dbus-daemon[1899]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1935 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 16:05:58.310941 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:05:58.313963 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 16:05:58.383856 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 16:05:58.436935 polkitd[2071]: Started polkitd version 121 Feb 13 16:05:58.465109 polkitd[2071]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 16:05:58.465238 polkitd[2071]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 16:05:58.473610 polkitd[2071]: Finished loading, compiling and executing 2 rules Feb 13 16:05:58.477986 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 16:05:58.480367 polkitd[2071]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 16:05:58.486345 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 16:05:58.511422 containerd[1932]: time="2025-02-13T16:05:58.511267271Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 16:05:58.549624 systemd-hostnamed[1935]: Hostname set to (transient) Feb 13 16:05:58.549662 systemd-resolved[1834]: System hostname changed to 'ip-172-31-18-147'. 
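The extend-filesystems pass above grows the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks; the EXT4-fs lines give the block size as 4 KiB, so in byte terms:

```python
BLOCK = 4096  # 4 KiB blocks, per the EXT4-fs resize lines above

for label, blocks in (("before", 553_472), ("after", 1_489_915)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after: 5.68 GiB
```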
Feb 13 16:05:58.560507 coreos-metadata[2038]: Feb 13 16:05:58.559 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:05:58.561964 coreos-metadata[2038]: Feb 13 16:05:58.561 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 16:05:58.565805 coreos-metadata[2038]: Feb 13 16:05:58.565 INFO Fetch successful Feb 13 16:05:58.565805 coreos-metadata[2038]: Feb 13 16:05:58.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 16:05:58.567330 coreos-metadata[2038]: Feb 13 16:05:58.566 INFO Fetch successful Feb 13 16:05:58.567838 ntpd[1903]: bind(24) AF_INET6 fe80::420:e0ff:fee1:bc63%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:05:58.567919 ntpd[1903]: unable to create socket on eth0 (6) for fe80::420:e0ff:fee1:bc63%2#123 Feb 13 16:05:58.567950 ntpd[1903]: failed to init interface for address fe80::420:e0ff:fee1:bc63%2 Feb 13 16:05:58.576923 unknown[2038]: wrote ssh authorized keys file for user: core Feb 13 16:05:58.636195 update-ssh-keys[2092]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:05:58.640317 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:05:58.652783 systemd[1]: Finished sshkeys.service. Feb 13 16:05:58.661417 containerd[1932]: time="2025-02-13T16:05:58.661305660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:58.664596 containerd[1932]: time="2025-02-13T16:05:58.664518828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:58.664804 containerd[1932]: time="2025-02-13T16:05:58.664769280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:05:58.664922 containerd[1932]: time="2025-02-13T16:05:58.664892472Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:05:58.665395 containerd[1932]: time="2025-02-13T16:05:58.665357112Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:05:58.665563 containerd[1932]: time="2025-02-13T16:05:58.665532660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:58.665789 containerd[1932]: time="2025-02-13T16:05:58.665753532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:58.666126 containerd[1932]: time="2025-02-13T16:05:58.665866944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
type=io.containerd.snapshotter.v1 Feb 13 16:05:58.667293 containerd[1932]: time="2025-02-13T16:05:58.667204044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:58.668091 containerd[1932]: time="2025-02-13T16:05:58.667438140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:58.668091 containerd[1932]: time="2025-02-13T16:05:58.667490940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:58.668091 containerd[1932]: time="2025-02-13T16:05:58.667519656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:58.668091 containerd[1932]: time="2025-02-13T16:05:58.667733988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:58.668571 containerd[1932]: time="2025-02-13T16:05:58.668526204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:58.668988 containerd[1932]: time="2025-02-13T16:05:58.668937012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:58.669223 containerd[1932]: time="2025-02-13T16:05:58.669185676Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:05:58.669575 containerd[1932]: time="2025-02-13T16:05:58.669537108Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:05:58.669802 containerd[1932]: time="2025-02-13T16:05:58.669766608Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:05:58.679698 containerd[1932]: time="2025-02-13T16:05:58.679524792Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:05:58.679698 containerd[1932]: time="2025-02-13T16:05:58.679635444Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.679671828Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.679946880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.679982760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.680368932Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.680789616Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.680981688Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681073104Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681106800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681140268Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681171564Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681203448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681235764Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681267576Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682098 containerd[1932]: time="2025-02-13T16:05:58.681297852Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681327120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681355560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681395292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681426204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681455496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681485964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681517944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681549156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681577668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681608844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681644088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681679308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681707016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.682778 containerd[1932]: time="2025-02-13T16:05:58.681736272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.683406 containerd[1932]: time="2025-02-13T16:05:58.681769668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.683406 containerd[1932]: time="2025-02-13T16:05:58.681803700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:05:58.683406 containerd[1932]: time="2025-02-13T16:05:58.681845508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.683406 containerd[1932]: time="2025-02-13T16:05:58.681873912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.683406 containerd[1932]: time="2025-02-13T16:05:58.681900984Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:05:58.683406 containerd[1932]: time="2025-02-13T16:05:58.682002864Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:05:58.684129 containerd[1932]: time="2025-02-13T16:05:58.684057768Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:05:58.684249 containerd[1932]: time="2025-02-13T16:05:58.684220896Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:05:58.684404 containerd[1932]: time="2025-02-13T16:05:58.684371916Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:05:58.684505 containerd[1932]: time="2025-02-13T16:05:58.684478188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:05:58.684627 containerd[1932]: time="2025-02-13T16:05:58.684600156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:05:58.685169 containerd[1932]: time="2025-02-13T16:05:58.684703176Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:05:58.685169 containerd[1932]: time="2025-02-13T16:05:58.684738156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 16:05:58.686024 containerd[1932]: time="2025-02-13T16:05:58.685894488Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:05:58.687087 containerd[1932]: time="2025-02-13T16:05:58.686392548Z" level=info msg="Connect containerd service" Feb 13 16:05:58.687087 containerd[1932]: time="2025-02-13T16:05:58.686521836Z" level=info msg="using legacy CRI server" Feb 13 16:05:58.687087 containerd[1932]: time="2025-02-13T16:05:58.686543292Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:05:58.687087 containerd[1932]: time="2025-02-13T16:05:58.687009648Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:05:58.698351 containerd[1932]: time="2025-02-13T16:05:58.695016972Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:05:58.701178 
containerd[1932]: time="2025-02-13T16:05:58.700594920Z" level=info msg="Start subscribing containerd event" Feb 13 16:05:58.702222 containerd[1932]: time="2025-02-13T16:05:58.701701008Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:05:58.704822 containerd[1932]: time="2025-02-13T16:05:58.702536208Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:05:58.705458 containerd[1932]: time="2025-02-13T16:05:58.705413448Z" level=info msg="Start recovering state" Feb 13 16:05:58.705709 containerd[1932]: time="2025-02-13T16:05:58.705682884Z" level=info msg="Start event monitor" Feb 13 16:05:58.705810 containerd[1932]: time="2025-02-13T16:05:58.705784680Z" level=info msg="Start snapshots syncer" Feb 13 16:05:58.705909 containerd[1932]: time="2025-02-13T16:05:58.705883524Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:05:58.706004 containerd[1932]: time="2025-02-13T16:05:58.705979032Z" level=info msg="Start streaming server" Feb 13 16:05:58.706296 containerd[1932]: time="2025-02-13T16:05:58.706264908Z" level=info msg="containerd successfully booted in 0.206635s" Feb 13 16:05:58.706388 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:05:58.865256 systemd-networkd[1832]: eth0: Gained IPv6LL Feb 13 16:05:58.874129 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:05:58.877495 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:05:58.890844 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 16:05:58.907515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:58.916865 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:05:59.067165 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:05:59.070764 amazon-ssm-agent[2105]: Initializing new seelog logger Feb 13 16:05:59.075604 amazon-ssm-agent[2105]: New Seelog Logger Creation Complete Feb 13 16:05:59.075604 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.075604 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.079751 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 processing appconfig overrides Feb 13 16:05:59.085169 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO Proxy environment variables: Feb 13 16:05:59.085318 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.085735 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.085911 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 processing appconfig overrides Feb 13 16:05:59.087480 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.087779 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.090090 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 processing appconfig overrides Feb 13 16:05:59.100096 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:59.100096 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 16:05:59.100096 amazon-ssm-agent[2105]: 2025/02/13 16:05:59 processing appconfig overrides Feb 13 16:05:59.184726 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO https_proxy: Feb 13 16:05:59.286142 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO http_proxy: Feb 13 16:05:59.385154 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO no_proxy: Feb 13 16:05:59.486278 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO Checking if agent identity type OnPrem can be assumed Feb 13 16:05:59.567882 tar[1920]: linux-arm64/LICENSE Feb 13 16:05:59.567882 tar[1920]: linux-arm64/README.md Feb 13 16:05:59.592149 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO Checking if agent identity type EC2 can be assumed Feb 13 16:05:59.610156 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:05:59.689513 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO Agent will take identity from EC2 Feb 13 16:05:59.788896 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:05:59.889254 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:05:59.965936 sshd_keygen[1939]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:05:59.988614 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:06:00.020447 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:06:00.033231 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:06:00.047727 systemd[1]: Started sshd@0-172.31.18.147:22-139.178.68.195:56184.service - OpenSSH per-connection server daemon (139.178.68.195:56184). Feb 13 16:06:00.084854 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:06:00.086718 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:06:00.091402 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 16:06:00.099235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:06:00.181608 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:06:00.190157 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 16:06:00.198409 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:06:00.213326 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:06:00.216766 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:06:00.290172 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 16:06:00.393278 sshd[2136]: Accepted publickey for core from 139.178.68.195 port 56184 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:00.394517 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 16:06:00.400535 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:00.432868 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:06:00.444448 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:06:00.457852 systemd-logind[1913]: New session 1 of user core. 
Feb 13 16:06:00.495260 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [Registrar] Starting registrar module Feb 13 16:06:00.504307 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:06:00.521806 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:06:00.552245 (systemd)[2147]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:06:00.596612 amazon-ssm-agent[2105]: 2025-02-13 16:05:59 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 16:06:00.903327 systemd[2147]: Queued start job for default target default.target. Feb 13 16:06:00.909878 systemd[2147]: Created slice app.slice - User Application Slice. Feb 13 16:06:00.910307 systemd[2147]: Reached target paths.target - Paths. Feb 13 16:06:00.910440 systemd[2147]: Reached target timers.target - Timers. Feb 13 16:06:00.914412 systemd[2147]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:06:00.958748 systemd[2147]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:06:00.959066 systemd[2147]: Reached target sockets.target - Sockets. Feb 13 16:06:00.959137 systemd[2147]: Reached target basic.target - Basic System. Feb 13 16:06:00.959255 systemd[2147]: Reached target default.target - Main User Target. Feb 13 16:06:00.959323 systemd[2147]: Startup finished in 389ms. Feb 13 16:06:00.959743 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:06:00.973346 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:06:00.991232 amazon-ssm-agent[2105]: 2025-02-13 16:06:00 INFO [EC2Identity] EC2 registration was successful. Feb 13 16:06:01.016260 amazon-ssm-agent[2105]: 2025-02-13 16:06:00 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 16:06:01.016387 amazon-ssm-agent[2105]: 2025-02-13 16:06:00 INFO [CredentialRefresher] credentialRefresher has started Feb 13 16:06:01.016387 amazon-ssm-agent[2105]: 2025-02-13 16:06:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 16:06:01.091630 amazon-ssm-agent[2105]: 2025-02-13 16:06:01 INFO [CredentialRefresher] Next credential rotation will be in 30.241657260566665 minutes Feb 13 16:06:01.137618 systemd[1]: Started sshd@1-172.31.18.147:22-139.178.68.195:52094.service - OpenSSH per-connection server daemon (139.178.68.195:52094). Feb 13 16:06:01.337997 sshd[2158]: Accepted publickey for core from 139.178.68.195 port 52094 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:01.340761 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:01.349581 systemd-logind[1913]: New session 2 of user core. Feb 13 16:06:01.369867 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:06:01.399282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:01.402750 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:06:01.405924 systemd[1]: Startup finished in 1.263s (kernel) + 8.836s (initrd) + 9.816s (userspace) = 19.916s. Feb 13 16:06:01.418639 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:01.504581 sshd[2158]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:01.513087 systemd[1]: sshd@1-172.31.18.147:22-139.178.68.195:52094.service: Deactivated successfully. 
Feb 13 16:06:01.518733 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:06:01.520578 systemd-logind[1913]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:06:01.523609 systemd-logind[1913]: Removed session 2. Feb 13 16:06:01.545585 systemd[1]: Started sshd@2-172.31.18.147:22-139.178.68.195:52104.service - OpenSSH per-connection server daemon (139.178.68.195:52104). Feb 13 16:06:01.565336 ntpd[1903]: Listen normally on 7 eth0 [fe80::420:e0ff:fee1:bc63%2]:123 Feb 13 16:06:01.731101 sshd[2175]: Accepted publickey for core from 139.178.68.195 port 52104 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:01.734350 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:01.745627 systemd-logind[1913]: New session 3 of user core. Feb 13 16:06:01.757445 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:06:01.882908 sshd[2175]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:01.889222 systemd[1]: sshd@2-172.31.18.147:22-139.178.68.195:52104.service: Deactivated successfully. Feb 13 16:06:01.895915 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:06:01.898372 systemd-logind[1913]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:06:01.900608 systemd-logind[1913]: Removed session 3. Feb 13 16:06:01.925223 systemd[1]: Started sshd@3-172.31.18.147:22-139.178.68.195:52110.service - OpenSSH per-connection server daemon (139.178.68.195:52110). Feb 13 16:06:02.070658 amazon-ssm-agent[2105]: 2025-02-13 16:06:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 16:06:02.111001 sshd[2186]: Accepted publickey for core from 139.178.68.195 port 52110 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:02.114223 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:02.123455 systemd-logind[1913]: New session 4 of user core. Feb 13 16:06:02.131390 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:06:02.171076 amazon-ssm-agent[2105]: 2025-02-13 16:06:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2189) started Feb 13 16:06:02.274379 sshd[2186]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:02.275857 amazon-ssm-agent[2105]: 2025-02-13 16:06:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 16:06:02.282869 systemd[1]: sshd@3-172.31.18.147:22-139.178.68.195:52110.service: Deactivated successfully. Feb 13 16:06:02.288683 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:06:02.293628 systemd-logind[1913]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:06:02.317707 systemd[1]: Started sshd@4-172.31.18.147:22-139.178.68.195:52120.service - OpenSSH per-connection server daemon (139.178.68.195:52120). Feb 13 16:06:02.324950 systemd-logind[1913]: Removed session 4.
Feb 13 16:06:02.522522 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 52120 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:02.525395 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:02.536348 systemd-logind[1913]: New session 5 of user core. Feb 13 16:06:02.542376 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:06:02.689917 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:06:02.690718 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:06:02.713494 sudo[2209]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:02.737906 sshd[2202]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:02.747546 systemd[1]: sshd@4-172.31.18.147:22-139.178.68.195:52120.service: Deactivated successfully. Feb 13 16:06:02.752627 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:06:02.755024 systemd-logind[1913]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:06:02.759541 systemd-logind[1913]: Removed session 5. Feb 13 16:06:02.782246 systemd[1]: Started sshd@5-172.31.18.147:22-139.178.68.195:52128.service - OpenSSH per-connection server daemon (139.178.68.195:52128). Feb 13 16:06:02.845663 kubelet[2166]: E0213 16:06:02.845596 2166 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:02.851109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:02.851529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:02.852297 systemd[1]: kubelet.service: Consumed 1.387s CPU time. Feb 13 16:06:02.967005 sshd[2214]: Accepted publickey for core from 139.178.68.195 port 52128 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:02.969818 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:02.980660 systemd-logind[1913]: New session 6 of user core. Feb 13 16:06:02.990423 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:06:03.102226 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:06:03.103596 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:06:03.110180 sudo[2219]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:03.120815 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 16:06:03.121580 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:06:03.143598 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 16:06:03.158186 auditctl[2222]: No rules Feb 13 16:06:03.159185 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:06:03.159637 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 16:06:03.173804 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Feb 13 16:06:03.215321 augenrules[2240]: No rules Feb 13 16:06:03.217813 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:06:03.221616 sudo[2218]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:03.246416 sshd[2214]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:03.255323 systemd[1]: sshd@5-172.31.18.147:22-139.178.68.195:52128.service: Deactivated successfully. Feb 13 16:06:03.258843 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:06:03.260447 systemd-logind[1913]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:06:03.262585 systemd-logind[1913]: Removed session 6. Feb 13 16:06:03.286566 systemd[1]: Started sshd@6-172.31.18.147:22-139.178.68.195:52132.service - OpenSSH per-connection server daemon (139.178.68.195:52132). Feb 13 16:06:03.468103 sshd[2248]: Accepted publickey for core from 139.178.68.195 port 52132 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:03.471773 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:03.480874 systemd-logind[1913]: New session 7 of user core. Feb 13 16:06:03.492388 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:06:03.598714 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:06:03.599550 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:06:04.211850 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:06:04.223676 (dockerd)[2266]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:06:04.720553 systemd-resolved[1834]: Clock change detected. Flushing caches. Feb 13 16:06:04.898754 dockerd[2266]: time="2025-02-13T16:06:04.898648860Z" level=info msg="Starting up" Feb 13 16:06:05.152318 systemd[1]: var-lib-docker-metacopy\x2dcheck725296088-merged.mount: Deactivated successfully. Feb 13 16:06:05.165384 dockerd[2266]: time="2025-02-13T16:06:05.165304282Z" level=info msg="Loading containers: start." Feb 13 16:06:05.376778 kernel: Initializing XFRM netlink socket Feb 13 16:06:05.466141 (udev-worker)[2290]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:05.556294 systemd-networkd[1832]: docker0: Link UP Feb 13 16:06:05.576194 dockerd[2266]: time="2025-02-13T16:06:05.576017892Z" level=info msg="Loading containers: done." Feb 13 16:06:05.603968 dockerd[2266]: time="2025-02-13T16:06:05.603897252Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:06:05.604325 dockerd[2266]: time="2025-02-13T16:06:05.604049712Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 16:06:05.604325 dockerd[2266]: time="2025-02-13T16:06:05.604275552Z" level=info msg="Daemon has completed initialization" Feb 13 16:06:05.667955 dockerd[2266]: time="2025-02-13T16:06:05.667834080Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:06:05.668453 systemd[1]: Started docker.service - Docker Application Container Engine. 
Feb 13 16:06:07.205025 containerd[1932]: time="2025-02-13T16:06:07.204959220Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 16:06:07.832925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88948801.mount: Deactivated successfully. Feb 13 16:06:10.388043 containerd[1932]: time="2025-02-13T16:06:10.387729556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:10.390886 containerd[1932]: time="2025-02-13T16:06:10.390765856Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 16:06:10.391672 containerd[1932]: time="2025-02-13T16:06:10.391567444Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:10.404768 containerd[1932]: time="2025-02-13T16:06:10.404690452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:10.409437 containerd[1932]: time="2025-02-13T16:06:10.408886528Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 3.20385946s" Feb 13 16:06:10.409437 containerd[1932]: time="2025-02-13T16:06:10.408979612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 16:06:10.451106 containerd[1932]: time="2025-02-13T16:06:10.451011568Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 16:06:13.238537 containerd[1932]: time="2025-02-13T16:06:13.237213174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:13.239730 containerd[1932]: time="2025-02-13T16:06:13.239663850Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 16:06:13.241185 containerd[1932]: time="2025-02-13T16:06:13.241112538Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:13.247028 containerd[1932]: time="2025-02-13T16:06:13.246960198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:13.249526 containerd[1932]: time="2025-02-13T16:06:13.249283722Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.798209658s" 
Feb 13 16:06:13.249526 containerd[1932]: time="2025-02-13T16:06:13.249355290Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 16:06:13.256276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:06:13.268328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:13.297857 containerd[1932]: time="2025-02-13T16:06:13.297797262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 16:06:13.613825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:13.625362 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:13.730275 kubelet[2483]: E0213 16:06:13.730178 2483 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:13.738557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:13.738901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:15.235556 containerd[1932]: time="2025-02-13T16:06:15.234337808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:15.237183 containerd[1932]: time="2025-02-13T16:06:15.236764892Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 16:06:15.238508 containerd[1932]: time="2025-02-13T16:06:15.238388000Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:15.244324 containerd[1932]: time="2025-02-13T16:06:15.244237364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:15.246875 containerd[1932]: time="2025-02-13T16:06:15.246660644Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.94880031s" Feb 13 16:06:15.246875 containerd[1932]: time="2025-02-13T16:06:15.246723428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 16:06:15.287435 containerd[1932]: time="2025-02-13T16:06:15.287316344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 16:06:16.602830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517624861.mount: Deactivated successfully. 
Feb 13 16:06:17.191822 containerd[1932]: time="2025-02-13T16:06:17.191171361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:17.192888 containerd[1932]: time="2025-02-13T16:06:17.192781317Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 16:06:17.195913 containerd[1932]: time="2025-02-13T16:06:17.195762105Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:17.201672 containerd[1932]: time="2025-02-13T16:06:17.201032265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:17.202858 containerd[1932]: time="2025-02-13T16:06:17.202767706Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.915384978s" Feb 13 16:06:17.202858 containerd[1932]: time="2025-02-13T16:06:17.202850026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 16:06:17.245869 containerd[1932]: time="2025-02-13T16:06:17.245801278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:06:17.863075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219492786.mount: Deactivated successfully. 
Feb 13 16:06:19.067149 containerd[1932]: time="2025-02-13T16:06:19.067039919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:19.069641 containerd[1932]: time="2025-02-13T16:06:19.069544511Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 16:06:19.072680 containerd[1932]: time="2025-02-13T16:06:19.071687915Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:19.083755 containerd[1932]: time="2025-02-13T16:06:19.083689511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:19.088968 containerd[1932]: time="2025-02-13T16:06:19.088856927Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.842973497s" Feb 13 16:06:19.089249 containerd[1932]: time="2025-02-13T16:06:19.089211371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 16:06:19.131202 containerd[1932]: time="2025-02-13T16:06:19.131132291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 16:06:19.659800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511065498.mount: Deactivated successfully. 
Feb 13 16:06:19.680537 containerd[1932]: time="2025-02-13T16:06:19.679908518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:19.683555 containerd[1932]: time="2025-02-13T16:06:19.683497802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 16:06:19.687526 containerd[1932]: time="2025-02-13T16:06:19.686120006Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:19.691406 containerd[1932]: time="2025-02-13T16:06:19.691292438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:19.693609 containerd[1932]: time="2025-02-13T16:06:19.693440918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 561.796431ms" Feb 13 16:06:19.693609 containerd[1932]: time="2025-02-13T16:06:19.693600290Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 16:06:19.734705 containerd[1932]: time="2025-02-13T16:06:19.734646110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 16:06:20.436519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491598798.mount: Deactivated successfully. Feb 13 16:06:23.990425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:06:24.002025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:24.437123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:24.450729 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:24.574923 kubelet[2621]: E0213 16:06:24.574795 2621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:24.583076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:24.583595 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 16:06:25.267776 containerd[1932]: time="2025-02-13T16:06:25.267667314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.270938 containerd[1932]: time="2025-02-13T16:06:25.270847086Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 16:06:25.273848 containerd[1932]: time="2025-02-13T16:06:25.273720378Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.281702 containerd[1932]: time="2025-02-13T16:06:25.281606250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.285340 containerd[1932]: time="2025-02-13T16:06:25.284385774Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.549462824s" Feb 13 16:06:25.285340 containerd[1932]: time="2025-02-13T16:06:25.284489202Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 16:06:28.742411 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 16:06:33.461196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:33.475751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:33.506698 systemd[1]: Reloading requested from client PID 2697 ('systemctl') (unit session-7.scope)... Feb 13 16:06:33.506736 systemd[1]: Reloading... Feb 13 16:06:33.692536 zram_generator::config[2736]: No configuration found. Feb 13 16:06:34.085207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:06:34.265119 systemd[1]: Reloading finished in 757 ms. Feb 13 16:06:34.364741 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:06:34.364938 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:06:34.365406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:34.373995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:34.660810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:34.679310 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:06:34.768160 kubelet[2800]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:34.768160 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 16:06:34.768160 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:34.769058 kubelet[2800]: I0213 16:06:34.768315 2800 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:06:36.219115 kubelet[2800]: I0213 16:06:36.219044 2800 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 16:06:36.219115 kubelet[2800]: I0213 16:06:36.219096 2800 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:06:36.219906 kubelet[2800]: I0213 16:06:36.219441 2800 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 16:06:36.252722 kubelet[2800]: E0213 16:06:36.252607 2800 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.253411 kubelet[2800]: I0213 16:06:36.253150 2800 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:36.267734 kubelet[2800]: I0213 16:06:36.267690 2800 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:06:36.270496 kubelet[2800]: I0213 16:06:36.270392 2800 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:06:36.270887 kubelet[2800]: I0213 16:06:36.270522 2800 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:06:36.271087 kubelet[2800]: I0213 16:06:36.270915 2800 topology_manager.go:138] "Creating 
topology manager with none policy" Feb 13 16:06:36.271087 kubelet[2800]: I0213 16:06:36.270939 2800 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:06:36.271290 kubelet[2800]: I0213 16:06:36.271244 2800 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:36.273510 kubelet[2800]: I0213 16:06:36.273149 2800 kubelet.go:400] "Attempting to sync node with API server" Feb 13 16:06:36.273510 kubelet[2800]: I0213 16:06:36.273196 2800 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:06:36.273510 kubelet[2800]: I0213 16:06:36.273280 2800 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:06:36.273510 kubelet[2800]: I0213 16:06:36.273341 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:06:36.275637 kubelet[2800]: W0213 16:06:36.274327 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-147&limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.275637 kubelet[2800]: E0213 16:06:36.274409 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-147&limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.275637 kubelet[2800]: W0213 16:06:36.275053 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.275637 kubelet[2800]: E0213 16:06:36.275128 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.276155 kubelet[2800]: I0213 16:06:36.276126 2800 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:06:36.276622 kubelet[2800]: I0213 16:06:36.276587 2800 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:06:36.276852 kubelet[2800]: W0213 16:06:36.276828 2800 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
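The three deprecation warnings above all point at the kubelet config file. A minimal sketch of the two v1beta1 KubeletConfiguration fields that replace --container-runtime-endpoint and --volume-plugin-dir; the endpoint value is an assumption, while the plugin dir is the Flexvolume path the kubelet itself logged. Note that --pod-infra-container-image has no config-file counterpart: per the log, the sandbox image is now owned by the CRI runtime (containerd's sandbox_image setting).

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Field names follow the published kubelet.config.k8s.io/v1beta1
    // KubeletConfiguration schema (only the two relevant fields sketched).
    type KubeletConfiguration struct {
    	APIVersion               string `json:"apiVersion"`
    	Kind                     string `json:"kind"`
    	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"`
    	VolumePluginDir          string `json:"volumePluginDir,omitempty"`
    }

    func main() {
    	cfg := KubeletConfiguration{
    		APIVersion:               "kubelet.config.k8s.io/v1beta1",
    		Kind:                     "KubeletConfiguration",
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumption
    		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/", // path from the log
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out)) // JSON is a subset of YAML, so this is a valid config file
    }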
Feb 13 16:06:36.278099 kubelet[2800]: I0213 16:06:36.278061 2800 server.go:1264] "Started kubelet" Feb 13 16:06:36.288044 kubelet[2800]: E0213 16:06:36.287305 2800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.147:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.147:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-147.1823d0327828c940 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-147,UID:ip-172-31-18-147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-147,},FirstTimestamp:2025-02-13 16:06:36.278024512 +0000 UTC m=+1.590372597,LastTimestamp:2025-02-13 16:06:36.278024512 +0000 UTC m=+1.590372597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-147,}" Feb 13 16:06:36.288044 kubelet[2800]: I0213 16:06:36.287903 2800 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:06:36.289460 kubelet[2800]: I0213 16:06:36.289387 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:06:36.291772 kubelet[2800]: I0213 16:06:36.290138 2800 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:06:36.291772 kubelet[2800]: I0213 16:06:36.291222 2800 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:06:36.297026 kubelet[2800]: I0213 16:06:36.296975 2800 server.go:455] "Adding debug handlers to kubelet server" Feb 13 16:06:36.299682 kubelet[2800]: I0213 16:06:36.299640 2800 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:06:36.306870 kubelet[2800]: I0213 16:06:36.306821 2800 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 16:06:36.307222 kubelet[2800]: I0213 16:06:36.307194 2800 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:06:36.308052 kubelet[2800]: E0213 16:06:36.307945 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="200ms" Feb 13 16:06:36.310892 kubelet[2800]: I0213 16:06:36.309903 2800 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:06:36.310892 kubelet[2800]: I0213 16:06:36.310199 2800 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:06:36.315163 kubelet[2800]: W0213 16:06:36.314299 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.315163 kubelet[2800]: E0213 16:06:36.314453 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.315725 kubelet[2800]: E0213 
16:06:36.315661 2800 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:06:36.318650 kubelet[2800]: I0213 16:06:36.318599 2800 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:06:36.342595 kubelet[2800]: I0213 16:06:36.342334 2800 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:06:36.345438 kubelet[2800]: I0213 16:06:36.344904 2800 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:06:36.345438 kubelet[2800]: I0213 16:06:36.344947 2800 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:06:36.345438 kubelet[2800]: I0213 16:06:36.344984 2800 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 16:06:36.345438 kubelet[2800]: E0213 16:06:36.345059 2800 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:06:36.357750 kubelet[2800]: W0213 16:06:36.357667 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.358142 kubelet[2800]: E0213 16:06:36.358089 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:36.374904 kubelet[2800]: I0213 16:06:36.374870 2800 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:06:36.375422 kubelet[2800]: I0213 16:06:36.375076 2800 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:06:36.375422 kubelet[2800]: I0213 16:06:36.375113 2800 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:36.378158 kubelet[2800]: I0213 16:06:36.377967 2800 policy_none.go:49] "None policy: Start" Feb 13 16:06:36.379552 kubelet[2800]: I0213 16:06:36.379514 2800 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:06:36.379669 kubelet[2800]: I0213 16:06:36.379563 2800 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:06:36.389757 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 16:06:36.405977 kubelet[2800]: I0213 16:06:36.405513 2800 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-147" Feb 13 16:06:36.406703 kubelet[2800]: E0213 16:06:36.406612 2800 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Feb 13 16:06:36.409028 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 16:06:36.426562 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
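Every reflector list, the CSR request, the event POST, and the lease update above fail identically: TCP connect to 172.31.18.147:6443 is refused because the kube-apiserver static pod has not started yet. This is the normal control-plane bootstrap chicken-and-egg; the kubelet simply keeps retrying while it brings the static pods up. A minimal Go probe of the same condition, with the address taken from the log (the kubelet's real retries use client-go's backoff, the linear sleep here is only illustrative):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const addr = "172.31.18.147:6443"
    	for attempt := 1; attempt <= 10; attempt++ {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Printf("apiserver reachable after %d attempt(s)\n", attempt)
    			return
    		}
    		// Prints "connect: connection refused" until the static pod is up.
    		fmt.Printf("attempt %d: %v\n", attempt, err)
    		time.Sleep(time.Duration(attempt) * time.Second)
    	}
    }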
Feb 13 16:06:36.429454 kubelet[2800]: I0213 16:06:36.429379 2800 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:06:36.429852 kubelet[2800]: I0213 16:06:36.429777 2800 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:06:36.430138 kubelet[2800]: I0213 16:06:36.429994 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:06:36.434644 kubelet[2800]: E0213 16:06:36.434521 2800 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-147\" not found" Feb 13 16:06:36.446125 kubelet[2800]: I0213 16:06:36.445679 2800 topology_manager.go:215] "Topology Admit Handler" podUID="c3857cfb6ec1d22f878b75528c018d73" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-147" Feb 13 16:06:36.448074 kubelet[2800]: I0213 16:06:36.447986 2800 topology_manager.go:215] "Topology Admit Handler" podUID="68e15984833a02a6c3803ca81a1f1874" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:36.450876 kubelet[2800]: I0213 16:06:36.450822 2800 topology_manager.go:215] "Topology Admit Handler" podUID="166d9983f317bd0a3fb5ebe58b2ba50c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-147" Feb 13 16:06:36.467407 systemd[1]: Created slice kubepods-burstable-podc3857cfb6ec1d22f878b75528c018d73.slice - libcontainer container kubepods-burstable-podc3857cfb6ec1d22f878b75528c018d73.slice. Feb 13 16:06:36.497984 systemd[1]: Created slice kubepods-burstable-pod68e15984833a02a6c3803ca81a1f1874.slice - libcontainer container kubepods-burstable-pod68e15984833a02a6c3803ca81a1f1874.slice. Feb 13 16:06:36.509825 kubelet[2800]: E0213 16:06:36.509561 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="400ms" Feb 13 16:06:36.521070 systemd[1]: Created slice kubepods-burstable-pod166d9983f317bd0a3fb5ebe58b2ba50c.slice - libcontainer container kubepods-burstable-pod166d9983f317bd0a3fb5ebe58b2ba50c.slice. 
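The three Admit Handler records and the matching .slice units show the systemd cgroup driver at work (CgroupDriver "systemd" in the NodeConfig dump above). The naming convention is visible directly in the log; a trivial sketch:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Per the records above: kubepods-<qos>-pod<uid>.slice, with dashes in
    // the pod UID rewritten to underscores, since systemd reserves "-" to
    // express slice nesting.
    func podSlice(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	// UIDs taken from the Admit Handler records in the log.
    	fmt.Println(podSlice("burstable", "c3857cfb6ec1d22f878b75528c018d73"))
    	fmt.Println(podSlice("besteffort", "8713e84f-4fa5-48c1-8c48-8574b5a13cd1"))
    }

The second output line matches the kubepods-besteffort-pod8713e84f_4fa5_48c1_8c48_8574b5a13cd1.slice unit created for kube-proxy later in this log.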
Feb 13 16:06:36.608375 kubelet[2800]: I0213 16:06:36.608323 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:36.609764 kubelet[2800]: I0213 16:06:36.608777 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/166d9983f317bd0a3fb5ebe58b2ba50c-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-147\" (UID: \"166d9983f317bd0a3fb5ebe58b2ba50c\") " pod="kube-system/kube-scheduler-ip-172-31-18-147" Feb 13 16:06:36.609764 kubelet[2800]: I0213 16:06:36.608892 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:36.609764 kubelet[2800]: I0213 16:06:36.609390 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:36.610244 kubelet[2800]: I0213 16:06:36.609818 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3857cfb6ec1d22f878b75528c018d73-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"c3857cfb6ec1d22f878b75528c018d73\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:36.610244 kubelet[2800]: I0213 16:06:36.609908 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:36.610244 kubelet[2800]: I0213 16:06:36.609993 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:36.610244 kubelet[2800]: I0213 16:06:36.610075 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3857cfb6ec1d22f878b75528c018d73-ca-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"c3857cfb6ec1d22f878b75528c018d73\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:36.610244 kubelet[2800]: I0213 16:06:36.610155 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c3857cfb6ec1d22f878b75528c018d73-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"c3857cfb6ec1d22f878b75528c018d73\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:36.610602 kubelet[2800]: I0213 16:06:36.609688 2800 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-147" Feb 13 16:06:36.611207 kubelet[2800]: E0213 16:06:36.611131 2800 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Feb 13 16:06:36.791619 containerd[1932]: time="2025-02-13T16:06:36.791568163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-147,Uid:c3857cfb6ec1d22f878b75528c018d73,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:36.814359 containerd[1932]: time="2025-02-13T16:06:36.813981931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-147,Uid:68e15984833a02a6c3803ca81a1f1874,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:36.826712 containerd[1932]: time="2025-02-13T16:06:36.826660459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-147,Uid:166d9983f317bd0a3fb5ebe58b2ba50c,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:36.912523 kubelet[2800]: E0213 16:06:36.910230 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="800ms" Feb 13 16:06:37.014403 kubelet[2800]: I0213 16:06:37.014332 2800 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-147" Feb 13 16:06:37.015033 kubelet[2800]: E0213 16:06:37.014855 2800 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Feb 13 16:06:37.130764 kubelet[2800]: W0213 16:06:37.130520 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.130764 kubelet[2800]: E0213 16:06:37.130598 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.139436 kubelet[2800]: W0213 16:06:37.139341 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.139436 kubelet[2800]: E0213 16:06:37.139441 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.322750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904523795.mount: Deactivated successfully. 
Feb 13 16:06:37.337565 containerd[1932]: time="2025-02-13T16:06:37.336680058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:37.343349 containerd[1932]: time="2025-02-13T16:06:37.343281558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 16:06:37.345834 containerd[1932]: time="2025-02-13T16:06:37.345616134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:37.348036 containerd[1932]: time="2025-02-13T16:06:37.347937618Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:37.352786 containerd[1932]: time="2025-02-13T16:06:37.352705950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:06:37.353148 containerd[1932]: time="2025-02-13T16:06:37.352914498Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:37.354630 containerd[1932]: time="2025-02-13T16:06:37.354565626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:06:37.357641 containerd[1932]: time="2025-02-13T16:06:37.357440946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:37.363053 containerd[1932]: time="2025-02-13T16:06:37.362320602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.093135ms" Feb 13 16:06:37.366567 containerd[1932]: time="2025-02-13T16:06:37.366497154Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.412339ms" Feb 13 16:06:37.376961 containerd[1932]: time="2025-02-13T16:06:37.376901034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.814931ms" Feb 13 16:06:37.623689 kubelet[2800]: W0213 16:06:37.623593 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-147&limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 
16:06:37.623689 kubelet[2800]: E0213 16:06:37.623693 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-147&limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.711280 containerd[1932]: time="2025-02-13T16:06:37.706728223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:37.711280 containerd[1932]: time="2025-02-13T16:06:37.710186503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:37.711280 containerd[1932]: time="2025-02-13T16:06:37.710219731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:37.711280 containerd[1932]: time="2025-02-13T16:06:37.710422783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:37.712120 kubelet[2800]: E0213 16:06:37.711760 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="1.6s" Feb 13 16:06:37.712903 containerd[1932]: time="2025-02-13T16:06:37.709297147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:37.712903 containerd[1932]: time="2025-02-13T16:06:37.709399831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:37.712903 containerd[1932]: time="2025-02-13T16:06:37.709437163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:37.712903 containerd[1932]: time="2025-02-13T16:06:37.712016683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:37.717074 containerd[1932]: time="2025-02-13T16:06:37.716380915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:37.717074 containerd[1932]: time="2025-02-13T16:06:37.716572375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:37.717074 containerd[1932]: time="2025-02-13T16:06:37.716620315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:37.717074 containerd[1932]: time="2025-02-13T16:06:37.716932711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:37.750105 kubelet[2800]: W0213 16:06:37.749266 2800 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.750105 kubelet[2800]: E0213 16:06:37.749376 2800 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:37.772812 systemd[1]: Started cri-containerd-9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c.scope - libcontainer container 9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c. Feb 13 16:06:37.788743 systemd[1]: Started cri-containerd-0ac1e02d35a12088b092ad2f972b8d19f13778bbc7ab88fe7b32a65f90f2d5c7.scope - libcontainer container 0ac1e02d35a12088b092ad2f972b8d19f13778bbc7ab88fe7b32a65f90f2d5c7. Feb 13 16:06:37.793603 systemd[1]: Started cri-containerd-81bdb0e858f9f44fe168fb055a454b6e58dcc9a1c2cf21944ef3dd41742ede5f.scope - libcontainer container 81bdb0e858f9f44fe168fb055a454b6e58dcc9a1c2cf21944ef3dd41742ede5f. Feb 13 16:06:37.819869 kubelet[2800]: I0213 16:06:37.819570 2800 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-147" Feb 13 16:06:37.821854 kubelet[2800]: E0213 16:06:37.821753 2800 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Feb 13 16:06:37.904595 containerd[1932]: time="2025-02-13T16:06:37.904210292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-147,Uid:68e15984833a02a6c3803ca81a1f1874,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c\"" Feb 13 16:06:37.918748 containerd[1932]: time="2025-02-13T16:06:37.917988884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-147,Uid:c3857cfb6ec1d22f878b75528c018d73,Namespace:kube-system,Attempt:0,} returns sandbox id \"81bdb0e858f9f44fe168fb055a454b6e58dcc9a1c2cf21944ef3dd41742ede5f\"" Feb 13 16:06:37.918748 containerd[1932]: time="2025-02-13T16:06:37.918173336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-147,Uid:166d9983f317bd0a3fb5ebe58b2ba50c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ac1e02d35a12088b092ad2f972b8d19f13778bbc7ab88fe7b32a65f90f2d5c7\"" Feb 13 16:06:37.920634 containerd[1932]: time="2025-02-13T16:06:37.920240528Z" level=info msg="CreateContainer within sandbox \"9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:06:37.934683 containerd[1932]: time="2025-02-13T16:06:37.933843308Z" level=info msg="CreateContainer within sandbox \"0ac1e02d35a12088b092ad2f972b8d19f13778bbc7ab88fe7b32a65f90f2d5c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:06:37.934683 containerd[1932]: time="2025-02-13T16:06:37.933906848Z" level=info msg="CreateContainer within sandbox \"81bdb0e858f9f44fe168fb055a454b6e58dcc9a1c2cf21944ef3dd41742ede5f\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:06:37.971068 containerd[1932]: time="2025-02-13T16:06:37.971001285Z" level=info msg="CreateContainer within sandbox \"9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae\"" Feb 13 16:06:37.972399 containerd[1932]: time="2025-02-13T16:06:37.972347625Z" level=info msg="StartContainer for \"c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae\"" Feb 13 16:06:37.990500 containerd[1932]: time="2025-02-13T16:06:37.989973045Z" level=info msg="CreateContainer within sandbox \"81bdb0e858f9f44fe168fb055a454b6e58dcc9a1c2cf21944ef3dd41742ede5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"807060598da52f3111b7d00aabe3a37a02183f9798c738059022c9ad8f23120d\"" Feb 13 16:06:37.992137 containerd[1932]: time="2025-02-13T16:06:37.991945149Z" level=info msg="StartContainer for \"807060598da52f3111b7d00aabe3a37a02183f9798c738059022c9ad8f23120d\"" Feb 13 16:06:38.008220 containerd[1932]: time="2025-02-13T16:06:38.008043257Z" level=info msg="CreateContainer within sandbox \"0ac1e02d35a12088b092ad2f972b8d19f13778bbc7ab88fe7b32a65f90f2d5c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"758cb2bc76a0ec3368a9e0534d6ab40c7065f4ebdc13d193265fb18c78c4b6b4\"" Feb 13 16:06:38.009838 containerd[1932]: time="2025-02-13T16:06:38.009776885Z" level=info msg="StartContainer for \"758cb2bc76a0ec3368a9e0534d6ab40c7065f4ebdc13d193265fb18c78c4b6b4\"" Feb 13 16:06:38.055062 systemd[1]: Started cri-containerd-c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae.scope - libcontainer container c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae. Feb 13 16:06:38.100981 systemd[1]: Started cri-containerd-807060598da52f3111b7d00aabe3a37a02183f9798c738059022c9ad8f23120d.scope - libcontainer container 807060598da52f3111b7d00aabe3a37a02183f9798c738059022c9ad8f23120d. Feb 13 16:06:38.145314 systemd[1]: Started cri-containerd-758cb2bc76a0ec3368a9e0534d6ab40c7065f4ebdc13d193265fb18c78c4b6b4.scope - libcontainer container 758cb2bc76a0ec3368a9e0534d6ab40c7065f4ebdc13d193265fb18c78c4b6b4. 
Feb 13 16:06:38.248157 containerd[1932]: time="2025-02-13T16:06:38.247226274Z" level=info msg="StartContainer for \"c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae\" returns successfully" Feb 13 16:06:38.265310 kubelet[2800]: E0213 16:06:38.264712 2800 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.147:6443: connect: connection refused Feb 13 16:06:38.295040 containerd[1932]: time="2025-02-13T16:06:38.294923298Z" level=info msg="StartContainer for \"807060598da52f3111b7d00aabe3a37a02183f9798c738059022c9ad8f23120d\" returns successfully" Feb 13 16:06:38.308141 containerd[1932]: time="2025-02-13T16:06:38.308039826Z" level=info msg="StartContainer for \"758cb2bc76a0ec3368a9e0534d6ab40c7065f4ebdc13d193265fb18c78c4b6b4\" returns successfully" Feb 13 16:06:39.426029 kubelet[2800]: I0213 16:06:39.425961 2800 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-147" Feb 13 16:06:42.166106 kubelet[2800]: E0213 16:06:42.166038 2800 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Feb 13 16:06:42.279108 kubelet[2800]: I0213 16:06:42.279051 2800 apiserver.go:52] "Watching apiserver" Feb 13 16:06:42.308099 kubelet[2800]: I0213 16:06:42.308037 2800 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 16:06:42.312301 kubelet[2800]: I0213 16:06:42.312247 2800 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-147" Feb 13 16:06:42.647617 update_engine[1914]: I20250213 16:06:42.647516 1914 update_attempter.cc:509] Updating boot flags... Feb 13 16:06:42.804499 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3090) Feb 13 16:06:44.639455 systemd[1]: Reloading requested from client PID 3174 ('systemctl') (unit session-7.scope)... Feb 13 16:06:44.639980 systemd[1]: Reloading... Feb 13 16:06:44.913604 zram_generator::config[3214]: No configuration found. Feb 13 16:06:45.216648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:06:45.521654 systemd[1]: Reloading finished in 880 ms. Feb 13 16:06:45.669984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:45.688518 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:06:45.689495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:45.689874 systemd[1]: kubelet.service: Consumed 2.396s CPU time, 113.0M memory peak, 0B memory swap peak. Feb 13 16:06:45.705747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:46.043817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:46.057340 (kubelet)[3274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:06:46.176382 kubelet[3274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:46.176382 kubelet[3274]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:06:46.176382 kubelet[3274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:46.176382 kubelet[3274]: I0213 16:06:46.175854 3274 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:06:46.190512 kubelet[3274]: I0213 16:06:46.189357 3274 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 16:06:46.190512 kubelet[3274]: I0213 16:06:46.189399 3274 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:06:46.190512 kubelet[3274]: I0213 16:06:46.189789 3274 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 16:06:46.193692 kubelet[3274]: I0213 16:06:46.193653 3274 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 16:06:46.197089 kubelet[3274]: I0213 16:06:46.197042 3274 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:46.217459 kubelet[3274]: I0213 16:06:46.217415 3274 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:06:46.220246 kubelet[3274]: I0213 16:06:46.219609 3274 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:06:46.220246 kubelet[3274]: I0213 16:06:46.219677 3274 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:06:46.220246 kubelet[3274]: 
I0213 16:06:46.219983 3274 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:06:46.220246 kubelet[3274]: I0213 16:06:46.220004 3274 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:06:46.220246 kubelet[3274]: I0213 16:06:46.220072 3274 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:46.221776 kubelet[3274]: I0213 16:06:46.220819 3274 kubelet.go:400] "Attempting to sync node with API server" Feb 13 16:06:46.221776 kubelet[3274]: I0213 16:06:46.221550 3274 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:06:46.221776 kubelet[3274]: I0213 16:06:46.221632 3274 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:06:46.221776 kubelet[3274]: I0213 16:06:46.221664 3274 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:06:46.227329 kubelet[3274]: I0213 16:06:46.227070 3274 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:06:46.229748 kubelet[3274]: I0213 16:06:46.229706 3274 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:06:46.234297 kubelet[3274]: I0213 16:06:46.230672 3274 server.go:1264] "Started kubelet" Feb 13 16:06:46.241611 kubelet[3274]: I0213 16:06:46.240216 3274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:06:46.248238 kubelet[3274]: I0213 16:06:46.248127 3274 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:06:46.254550 kubelet[3274]: I0213 16:06:46.251607 3274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:06:46.254550 kubelet[3274]: I0213 16:06:46.252198 3274 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:06:46.254550 kubelet[3274]: I0213 16:06:46.254429 3274 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:06:46.258055 kubelet[3274]: I0213 16:06:46.257940 3274 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 16:06:46.264517 kubelet[3274]: I0213 16:06:46.258314 3274 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:06:46.264517 kubelet[3274]: I0213 16:06:46.259282 3274 server.go:455] "Adding debug handlers to kubelet server" Feb 13 16:06:46.286348 kubelet[3274]: I0213 16:06:46.286240 3274 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:06:46.286728 kubelet[3274]: I0213 16:06:46.286629 3274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:06:46.302943 kubelet[3274]: E0213 16:06:46.302429 3274 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:06:46.309770 kubelet[3274]: I0213 16:06:46.309050 3274 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:06:46.385852 kubelet[3274]: E0213 16:06:46.385779 3274 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 13 16:06:46.397298 kubelet[3274]: I0213 16:06:46.396889 3274 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 16:06:46.402594 kubelet[3274]: I0213 16:06:46.401244 3274 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-147" Feb 13 16:06:46.412334 kubelet[3274]: I0213 16:06:46.410886 3274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:06:46.415748 kubelet[3274]: I0213 16:06:46.415597 3274 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:06:46.415748 kubelet[3274]: I0213 16:06:46.415654 3274 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 16:06:46.415958 kubelet[3274]: E0213 16:06:46.415745 3274 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:06:46.493013 kubelet[3274]: I0213 16:06:46.492670 3274 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-147" Feb 13 16:06:46.493013 kubelet[3274]: I0213 16:06:46.492801 3274 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-147" Feb 13 16:06:46.520214 kubelet[3274]: E0213 16:06:46.518794 3274 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:06:46.580293 kubelet[3274]: I0213 16:06:46.580145 3274 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:06:46.580293 kubelet[3274]: I0213 16:06:46.580180 3274 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:06:46.580293 kubelet[3274]: I0213 16:06:46.580217 3274 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:46.581660 kubelet[3274]: I0213 16:06:46.581184 3274 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:06:46.581660 kubelet[3274]: I0213 16:06:46.581263 3274 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:06:46.581660 kubelet[3274]: I0213 16:06:46.581336 3274 policy_none.go:49] "None policy: Start" Feb 13 16:06:46.583809 kubelet[3274]: I0213 16:06:46.583609 3274 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:06:46.583809 kubelet[3274]: I0213 16:06:46.583654 3274 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:06:46.584271 kubelet[3274]: I0213 16:06:46.583936 3274 state_mem.go:75] "Updated machine memory state" Feb 13 16:06:46.596178 kubelet[3274]: I0213 16:06:46.596120 3274 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:06:46.596559 kubelet[3274]: I0213 16:06:46.596403 3274 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:06:46.599915 kubelet[3274]: I0213 16:06:46.598700 3274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:06:46.720346 kubelet[3274]: I0213 16:06:46.719514 3274 topology_manager.go:215] "Topology Admit Handler" podUID="166d9983f317bd0a3fb5ebe58b2ba50c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-147" Feb 13 16:06:46.720346 kubelet[3274]: I0213 16:06:46.719746 3274 topology_manager.go:215] "Topology Admit Handler" podUID="c3857cfb6ec1d22f878b75528c018d73" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-147" Feb 13 16:06:46.720346 kubelet[3274]: I0213 16:06:46.719818 3274 topology_manager.go:215] "Topology Admit Handler" podUID="68e15984833a02a6c3803ca81a1f1874" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:46.738701 kubelet[3274]: E0213 
16:06:46.738620 3274 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-147\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:46.761572 kubelet[3274]: I0213 16:06:46.761344 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:46.761752 kubelet[3274]: I0213 16:06:46.761603 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:46.761818 kubelet[3274]: I0213 16:06:46.761715 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:46.761881 kubelet[3274]: I0213 16:06:46.761832 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3857cfb6ec1d22f878b75528c018d73-ca-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"c3857cfb6ec1d22f878b75528c018d73\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:46.762809 kubelet[3274]: I0213 16:06:46.761890 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3857cfb6ec1d22f878b75528c018d73-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"c3857cfb6ec1d22f878b75528c018d73\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:46.762809 kubelet[3274]: I0213 16:06:46.762578 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:46.762809 kubelet[3274]: I0213 16:06:46.762625 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68e15984833a02a6c3803ca81a1f1874-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"68e15984833a02a6c3803ca81a1f1874\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Feb 13 16:06:46.762809 kubelet[3274]: I0213 16:06:46.762693 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/166d9983f317bd0a3fb5ebe58b2ba50c-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-147\" (UID: \"166d9983f317bd0a3fb5ebe58b2ba50c\") " pod="kube-system/kube-scheduler-ip-172-31-18-147" Feb 13 16:06:46.762809 kubelet[3274]: I0213 
16:06:46.762753 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3857cfb6ec1d22f878b75528c018d73-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"c3857cfb6ec1d22f878b75528c018d73\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Feb 13 16:06:47.226039 kubelet[3274]: I0213 16:06:47.225972 3274 apiserver.go:52] "Watching apiserver" Feb 13 16:06:47.260296 kubelet[3274]: I0213 16:06:47.260051 3274 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 16:06:47.291874 kubelet[3274]: I0213 16:06:47.291792 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-147" podStartSLOduration=1.2917719509999999 podStartE2EDuration="1.291771951s" podCreationTimestamp="2025-02-13 16:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:47.291627159 +0000 UTC m=+1.224197551" watchObservedRunningTime="2025-02-13 16:06:47.291771951 +0000 UTC m=+1.224342343" Feb 13 16:06:47.330993 kubelet[3274]: I0213 16:06:47.329970 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-147" podStartSLOduration=1.3299368710000001 podStartE2EDuration="1.329936871s" podCreationTimestamp="2025-02-13 16:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:47.307058763 +0000 UTC m=+1.239629155" watchObservedRunningTime="2025-02-13 16:06:47.329936871 +0000 UTC m=+1.262507275" Feb 13 16:06:47.542655 kubelet[3274]: E0213 16:06:47.541705 3274 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-147\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-147" Feb 13 16:06:47.569124 kubelet[3274]: I0213 16:06:47.569013 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-147" podStartSLOduration=2.5689841920000003 podStartE2EDuration="2.568984192s" podCreationTimestamp="2025-02-13 16:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:47.331576359 +0000 UTC m=+1.264146943" watchObservedRunningTime="2025-02-13 16:06:47.568984192 +0000 UTC m=+1.501554584" Feb 13 16:06:51.824427 sudo[2251]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:51.847808 sshd[2248]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:51.853284 systemd[1]: sshd@6-172.31.18.147:22-139.178.68.195:52132.service: Deactivated successfully. Feb 13 16:06:51.857441 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:06:51.858103 systemd[1]: session-7.scope: Consumed 11.889s CPU time, 187.1M memory peak, 0B memory swap peak. Feb 13 16:06:51.861015 systemd-logind[1913]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:06:51.864815 systemd-logind[1913]: Removed session 7. 
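The podStartSLOduration figures above are straight clock arithmetic: for static pods with no image pull (firstStartedPulling is the zero time), the tracker reports observedRunningTime minus podCreationTimestamp. Recomputing the kube-scheduler number from the timestamps in the log record:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the kube-scheduler latency record above.
    	created, _ := time.Parse(time.RFC3339, "2025-02-13T16:06:46Z")
    	running, _ := time.Parse(time.RFC3339Nano, "2025-02-13T16:06:47.291771951Z")
    	fmt.Println(running.Sub(created)) // 1.291771951s, matching podStartSLOduration
    }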
Feb 13 16:06:58.933107 kubelet[3274]: I0213 16:06:58.933055 3274 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:06:58.935189 containerd[1932]: time="2025-02-13T16:06:58.935032505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 16:06:58.936673 kubelet[3274]: I0213 16:06:58.935598 3274 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:06:59.622521 kubelet[3274]: I0213 16:06:59.620507 3274 topology_manager.go:215] "Topology Admit Handler" podUID="8713e84f-4fa5-48c1-8c48-8574b5a13cd1" podNamespace="kube-system" podName="kube-proxy-hn4w8" Feb 13 16:06:59.647724 systemd[1]: Created slice kubepods-besteffort-pod8713e84f_4fa5_48c1_8c48_8574b5a13cd1.slice - libcontainer container kubepods-besteffort-pod8713e84f_4fa5_48c1_8c48_8574b5a13cd1.slice. Feb 13 16:06:59.653822 kubelet[3274]: I0213 16:06:59.652840 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8713e84f-4fa5-48c1-8c48-8574b5a13cd1-xtables-lock\") pod \"kube-proxy-hn4w8\" (UID: \"8713e84f-4fa5-48c1-8c48-8574b5a13cd1\") " pod="kube-system/kube-proxy-hn4w8" Feb 13 16:06:59.653822 kubelet[3274]: I0213 16:06:59.653653 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8713e84f-4fa5-48c1-8c48-8574b5a13cd1-kube-proxy\") pod \"kube-proxy-hn4w8\" (UID: \"8713e84f-4fa5-48c1-8c48-8574b5a13cd1\") " pod="kube-system/kube-proxy-hn4w8" Feb 13 16:06:59.654138 kubelet[3274]: I0213 16:06:59.653894 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8713e84f-4fa5-48c1-8c48-8574b5a13cd1-lib-modules\") pod \"kube-proxy-hn4w8\" (UID: \"8713e84f-4fa5-48c1-8c48-8574b5a13cd1\") " pod="kube-system/kube-proxy-hn4w8" Feb 13 16:06:59.654557 kubelet[3274]: I0213 16:06:59.654368 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgcq2\" (UniqueName: \"kubernetes.io/projected/8713e84f-4fa5-48c1-8c48-8574b5a13cd1-kube-api-access-lgcq2\") pod \"kube-proxy-hn4w8\" (UID: \"8713e84f-4fa5-48c1-8c48-8574b5a13cd1\") " pod="kube-system/kube-proxy-hn4w8" Feb 13 16:06:59.857590 kubelet[3274]: I0213 16:06:59.857516 3274 topology_manager.go:215] "Topology Admit Handler" podUID="baf1aa4d-aaa8-4f50-95c3-d433cc399ce9" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-r5mpj" Feb 13 16:06:59.879395 systemd[1]: Created slice kubepods-besteffort-podbaf1aa4d_aaa8_4f50_95c3_d433cc399ce9.slice - libcontainer container kubepods-besteffort-podbaf1aa4d_aaa8_4f50_95c3_d433cc399ce9.slice. 
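With the node registered and the apiserver up, the kubelet pushes the node's PodCIDR (192.168.0.0/24) to containerd over CRI, and the kube-proxy pod is admitted. A hedged client-go sketch that reads the same PodCIDR back from the API; the kubeconfig path is an assumption, while the node name is from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ip-172-31-18-147", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("PodCIDR:", node.Spec.PodCIDR) // 192.168.0.0/24 per the log
    }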
Feb 13 16:06:59.957215 kubelet[3274]: I0213 16:06:59.957093 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/baf1aa4d-aaa8-4f50-95c3-d433cc399ce9-var-lib-calico\") pod \"tigera-operator-7bc55997bb-r5mpj\" (UID: \"baf1aa4d-aaa8-4f50-95c3-d433cc399ce9\") " pod="tigera-operator/tigera-operator-7bc55997bb-r5mpj" Feb 13 16:06:59.957215 kubelet[3274]: I0213 16:06:59.957178 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjnq\" (UniqueName: \"kubernetes.io/projected/baf1aa4d-aaa8-4f50-95c3-d433cc399ce9-kube-api-access-5wjnq\") pod \"tigera-operator-7bc55997bb-r5mpj\" (UID: \"baf1aa4d-aaa8-4f50-95c3-d433cc399ce9\") " pod="tigera-operator/tigera-operator-7bc55997bb-r5mpj" Feb 13 16:06:59.964690 containerd[1932]: time="2025-02-13T16:06:59.964578018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hn4w8,Uid:8713e84f-4fa5-48c1-8c48-8574b5a13cd1,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:00.027701 containerd[1932]: time="2025-02-13T16:07:00.027305786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:00.027701 containerd[1932]: time="2025-02-13T16:07:00.027447134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:00.028388 containerd[1932]: time="2025-02-13T16:07:00.028151162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:00.028667 containerd[1932]: time="2025-02-13T16:07:00.028401782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:00.072966 systemd[1]: run-containerd-runc-k8s.io-3892df7e46a7f52ef2394b84add6e56abdb2eda590c968eec5e73a9f7bcd260b-runc.fcJex0.mount: Deactivated successfully. Feb 13 16:07:00.086009 systemd[1]: Started cri-containerd-3892df7e46a7f52ef2394b84add6e56abdb2eda590c968eec5e73a9f7bcd260b.scope - libcontainer container 3892df7e46a7f52ef2394b84add6e56abdb2eda590c968eec5e73a9f7bcd260b. 
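The kube-api-access-5wjnq projected volume attached above carries the pod's bound service-account token; inside the container it surfaces at the standard in-cluster path, which is fixed by convention rather than shown in the log. A small sketch of reading it:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Standard mount point of the kube-api-access-* projected volume.
    	tok, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
    	if err != nil {
    		fmt.Println("not running inside a pod:", err)
    		return
    	}
    	fmt.Printf("token present, %d bytes\n", len(tok))
    }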
Feb 13 16:07:00.132186 containerd[1932]: time="2025-02-13T16:07:00.132012207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hn4w8,Uid:8713e84f-4fa5-48c1-8c48-8574b5a13cd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3892df7e46a7f52ef2394b84add6e56abdb2eda590c968eec5e73a9f7bcd260b\""
Feb 13 16:07:00.140172 containerd[1932]: time="2025-02-13T16:07:00.140059743Z" level=info msg="CreateContainer within sandbox \"3892df7e46a7f52ef2394b84add6e56abdb2eda590c968eec5e73a9f7bcd260b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 16:07:00.172063 containerd[1932]: time="2025-02-13T16:07:00.171863211Z" level=info msg="CreateContainer within sandbox \"3892df7e46a7f52ef2394b84add6e56abdb2eda590c968eec5e73a9f7bcd260b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"541274f7e7288e99542862121af86d1cecee5932d3623f8f55a7498f36abd6e4\""
Feb 13 16:07:00.173634 containerd[1932]: time="2025-02-13T16:07:00.173314167Z" level=info msg="StartContainer for \"541274f7e7288e99542862121af86d1cecee5932d3623f8f55a7498f36abd6e4\""
Feb 13 16:07:00.194568 containerd[1932]: time="2025-02-13T16:07:00.193026015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r5mpj,Uid:baf1aa4d-aaa8-4f50-95c3-d433cc399ce9,Namespace:tigera-operator,Attempt:0,}"
Feb 13 16:07:00.229062 systemd[1]: Started cri-containerd-541274f7e7288e99542862121af86d1cecee5932d3623f8f55a7498f36abd6e4.scope - libcontainer container 541274f7e7288e99542862121af86d1cecee5932d3623f8f55a7498f36abd6e4.
Feb 13 16:07:00.258453 containerd[1932]: time="2025-02-13T16:07:00.257753895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:00.258453 containerd[1932]: time="2025-02-13T16:07:00.258330675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:00.258453 containerd[1932]: time="2025-02-13T16:07:00.258372183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:00.259246 containerd[1932]: time="2025-02-13T16:07:00.259145475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:00.309854 systemd[1]: Started cri-containerd-457079ba2a52c1156a46b836595d85ffd3231550d96cf610a71b247a8a1e7616.scope - libcontainer container 457079ba2a52c1156a46b836595d85ffd3231550d96cf610a71b247a8a1e7616.
Feb 13 16:07:00.338776 containerd[1932]: time="2025-02-13T16:07:00.338694304Z" level=info msg="StartContainer for \"541274f7e7288e99542862121af86d1cecee5932d3623f8f55a7498f36abd6e4\" returns successfully"
Feb 13 16:07:00.400132 containerd[1932]: time="2025-02-13T16:07:00.399910444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r5mpj,Uid:baf1aa4d-aaa8-4f50-95c3-d433cc399ce9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"457079ba2a52c1156a46b836595d85ffd3231550d96cf610a71b247a8a1e7616\""
Feb 13 16:07:00.406294 containerd[1932]: time="2025-02-13T16:07:00.405965176Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 16:07:04.852731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320766158.mount: Deactivated successfully.
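The entries above give the full kube-proxy startup sequence with nanosecond timestamps: RunPodSandbox requested at 16:06:59.964578018Z, the sandbox id returned at 16:07:00.132012207Z, and StartContainer returning at 16:07:00.338694304Z. A sketch of the stage latencies, with the timestamps copied from those entries (the `ts` helper is an assumption for trimming nanoseconds to what `datetime` accepts):

```python
from datetime import datetime

def ts(s: str) -> datetime:
    # containerd prints RFC 3339 with nanoseconds; trim to microseconds.
    head, frac = s.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{head}.{frac[:6]}+00:00")

run_sandbox   = ts("2025-02-13T16:06:59.964578018Z")  # RunPodSandbox request
sandbox_ready = ts("2025-02-13T16:07:00.132012207Z")  # sandbox id returned
started       = ts("2025-02-13T16:07:00.338694304Z")  # StartContainer returned

print("sandbox setup:", (sandbox_ready - run_sandbox).total_seconds(), "s")  # ~0.167 s
print("create+start:", (started - sandbox_ready).total_seconds(), "s")       # ~0.207 s
```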
Feb 13 16:07:05.536483 containerd[1932]: time="2025-02-13T16:07:05.536370718Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:05.538596 containerd[1932]: time="2025-02-13T16:07:05.538456822Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 16:07:05.543308 containerd[1932]: time="2025-02-13T16:07:05.541739014Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:05.557575 containerd[1932]: time="2025-02-13T16:07:05.557434066Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:05.559767 containerd[1932]: time="2025-02-13T16:07:05.559704322Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 5.15365613s"
Feb 13 16:07:05.559997 containerd[1932]: time="2025-02-13T16:07:05.559954858Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 16:07:05.565830 containerd[1932]: time="2025-02-13T16:07:05.565755922Z" level=info msg="CreateContainer within sandbox \"457079ba2a52c1156a46b836595d85ffd3231550d96cf610a71b247a8a1e7616\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 16:07:05.597713 containerd[1932]: time="2025-02-13T16:07:05.597609250Z" level=info msg="CreateContainer within sandbox \"457079ba2a52c1156a46b836595d85ffd3231550d96cf610a71b247a8a1e7616\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fceb5d97a6cfc4d1b5a709105303a3c31d08eb2aa57b2a9eafa1085f89cdd592\""
Feb 13 16:07:05.599505 containerd[1932]: time="2025-02-13T16:07:05.598383070Z" level=info msg="StartContainer for \"fceb5d97a6cfc4d1b5a709105303a3c31d08eb2aa57b2a9eafa1085f89cdd592\""
Feb 13 16:07:05.659920 systemd[1]: Started cri-containerd-fceb5d97a6cfc4d1b5a709105303a3c31d08eb2aa57b2a9eafa1085f89cdd592.scope - libcontainer container fceb5d97a6cfc4d1b5a709105303a3c31d08eb2aa57b2a9eafa1085f89cdd592.
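The pull entries above carry enough data to estimate registry throughput: 19124160 bytes read over the 5.15365613 s reported for the pull. A worked check using only those figures:

```python
# Figures copied from the containerd entries above.
bytes_read = 19124160      # "stop pulling image ...: bytes read=19124160"
duration_s = 5.15365613    # "Pulled image ... in 5.15365613s"
print(f"effective pull rate: {bytes_read / duration_s / 1e6:.2f} MB/s")
# -> roughly 3.71 MB/s for quay.io/tigera/operator:v1.36.2
```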
Feb 13 16:07:05.717934 containerd[1932]: time="2025-02-13T16:07:05.717757762Z" level=info msg="StartContainer for \"fceb5d97a6cfc4d1b5a709105303a3c31d08eb2aa57b2a9eafa1085f89cdd592\" returns successfully"
Feb 13 16:07:06.437456 kubelet[3274]: I0213 16:07:06.437372 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hn4w8" podStartSLOduration=7.437349202 podStartE2EDuration="7.437349202s" podCreationTimestamp="2025-02-13 16:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:00.567799505 +0000 UTC m=+14.500369909" watchObservedRunningTime="2025-02-13 16:07:06.437349202 +0000 UTC m=+20.369919594"
Feb 13 16:07:11.287060 kubelet[3274]: I0213 16:07:11.285181 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-r5mpj" podStartSLOduration=7.12677432 podStartE2EDuration="12.285155426s" podCreationTimestamp="2025-02-13 16:06:59 +0000 UTC" firstStartedPulling="2025-02-13 16:07:00.403831792 +0000 UTC m=+14.336402172" lastFinishedPulling="2025-02-13 16:07:05.562212886 +0000 UTC m=+19.494783278" observedRunningTime="2025-02-13 16:07:06.591268451 +0000 UTC m=+20.523838843" watchObservedRunningTime="2025-02-13 16:07:11.285155426 +0000 UTC m=+25.217725842"
Feb 13 16:07:11.287060 kubelet[3274]: I0213 16:07:11.285564 3274 topology_manager.go:215] "Topology Admit Handler" podUID="31c540e0-bcf4-4991-8cc6-cd2997604431" podNamespace="calico-system" podName="calico-typha-58bc8f4d67-5v7cc"
Feb 13 16:07:11.305102 kubelet[3274]: W0213 16:07:11.304323 3274 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:11.305102 kubelet[3274]: E0213 16:07:11.304397 3274 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:11.305102 kubelet[3274]: W0213 16:07:11.304501 3274 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:11.305102 kubelet[3274]: E0213 16:07:11.304535 3274 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:11.305102 kubelet[3274]: W0213 16:07:11.304595 3274 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:11.307199 kubelet[3274]: E0213 16:07:11.304621 3274 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:11.320853 systemd[1]: Created slice kubepods-besteffort-pod31c540e0_bcf4_4991_8cc6_cd2997604431.slice - libcontainer container kubepods-besteffort-pod31c540e0_bcf4_4991_8cc6_cd2997604431.slice.
Feb 13 16:07:11.332832 kubelet[3274]: I0213 16:07:11.332687 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c540e0-bcf4-4991-8cc6-cd2997604431-tigera-ca-bundle\") pod \"calico-typha-58bc8f4d67-5v7cc\" (UID: \"31c540e0-bcf4-4991-8cc6-cd2997604431\") " pod="calico-system/calico-typha-58bc8f4d67-5v7cc"
Feb 13 16:07:11.332832 kubelet[3274]: I0213 16:07:11.332786 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/31c540e0-bcf4-4991-8cc6-cd2997604431-typha-certs\") pod \"calico-typha-58bc8f4d67-5v7cc\" (UID: \"31c540e0-bcf4-4991-8cc6-cd2997604431\") " pod="calico-system/calico-typha-58bc8f4d67-5v7cc"
Feb 13 16:07:11.332832 kubelet[3274]: I0213 16:07:11.332838 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvn8\" (UniqueName: \"kubernetes.io/projected/31c540e0-bcf4-4991-8cc6-cd2997604431-kube-api-access-vrvn8\") pod \"calico-typha-58bc8f4d67-5v7cc\" (UID: \"31c540e0-bcf4-4991-8cc6-cd2997604431\") " pod="calico-system/calico-typha-58bc8f4d67-5v7cc"
Feb 13 16:07:11.493602 kubelet[3274]: I0213 16:07:11.493379 3274 topology_manager.go:215] "Topology Admit Handler" podUID="ffd808bf-1f3a-457d-a86e-3118bee8ab30" podNamespace="calico-system" podName="calico-node-mlcn4"
Feb 13 16:07:11.525460 systemd[1]: Created slice kubepods-besteffort-podffd808bf_1f3a_457d_a86e_3118bee8ab30.slice - libcontainer container kubepods-besteffort-podffd808bf_1f3a_457d_a86e_3118bee8ab30.slice.
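The two pod_startup_latency_tracker entries above encode a simple relationship: for kube-proxy, whose image needed no pull (both pull timestamps are zero-valued), podStartSLOduration equals podStartE2EDuration; for tigera-operator, the SLO duration is the E2E duration minus the pull window. A worked check using the values copied from the tigera-operator entry:

```python
# Values copied from the tigera-operator startup-latency entry above,
# expressed as seconds past 16:07:00 for the two pull timestamps.
e2e        = 12.285155426   # podStartE2EDuration
first_pull = 0.403831792    # firstStartedPulling
last_pull  = 5.562212886    # lastFinishedPulling

slo = e2e - (last_pull - first_pull)
print(f"{slo:.9f}")
# -> 7.126774332, matching podStartSLOduration=7.12677432 up to rounding
```

The reflector warnings in between are a separate, transient condition: the node's kubelet tried to list the typha-certs Secret and the tigera-ca-bundle/kube-root-ca.crt ConfigMaps before the API server's node authorizer had established a relationship between node ip-172-31-18-147 and the newly admitted calico-typha pod that references them.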
Feb 13 16:07:11.536596 kubelet[3274]: I0213 16:07:11.535326 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-lib-modules\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.536596 kubelet[3274]: I0213 16:07:11.535420 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-policysync\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.536596 kubelet[3274]: I0213 16:07:11.535547 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-cni-log-dir\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.536596 kubelet[3274]: I0213 16:07:11.535631 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g275l\" (UniqueName: \"kubernetes.io/projected/ffd808bf-1f3a-457d-a86e-3118bee8ab30-kube-api-access-g275l\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.536596 kubelet[3274]: I0213 16:07:11.535686 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-flexvol-driver-host\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.537039 kubelet[3274]: I0213 16:07:11.535744 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-cni-net-dir\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.537039 kubelet[3274]: I0213 16:07:11.535815 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-xtables-lock\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.537039 kubelet[3274]: I0213 16:07:11.535863 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-var-lib-calico\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.537039 kubelet[3274]: I0213 16:07:11.535933 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ffd808bf-1f3a-457d-a86e-3118bee8ab30-node-certs\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.537039 kubelet[3274]: I0213 16:07:11.535989 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-var-run-calico\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.539595 kubelet[3274]: I0213 16:07:11.536056 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffd808bf-1f3a-457d-a86e-3118bee8ab30-tigera-ca-bundle\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.539595 kubelet[3274]: I0213 16:07:11.536107 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ffd808bf-1f3a-457d-a86e-3118bee8ab30-cni-bin-dir\") pod \"calico-node-mlcn4\" (UID: \"ffd808bf-1f3a-457d-a86e-3118bee8ab30\") " pod="calico-system/calico-node-mlcn4"
Feb 13 16:07:11.642128 kubelet[3274]: E0213 16:07:11.642065 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.642265 kubelet[3274]: W0213 16:07:11.642112 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.642915 kubelet[3274]: E0213 16:07:11.642302 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.643880 kubelet[3274]: E0213 16:07:11.643385 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.644057 kubelet[3274]: W0213 16:07:11.643870 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.646690 kubelet[3274]: E0213 16:07:11.646585 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.647094 kubelet[3274]: E0213 16:07:11.647049 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.647259 kubelet[3274]: W0213 16:07:11.647084 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.647259 kubelet[3274]: E0213 16:07:11.647147 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
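The three kubelet entries above form one unit that recurs dozens of times through the rest of this excerpt: each time kubelet probes its FlexVolume plugin directory it execs the uds stub under nodeagent~uds (apparently the Calico flex driver directory; compare the flexvol-driver-host volume above), the binary is not found, the empty output fails to unmarshal as JSON, and the plugin is skipped. When reading a dump like this, a hedged sketch for collapsing such repeats (file name and helper are hypothetical):

```python
import re
from collections import Counter

# Strip the per-entry journal and klog timestamps, then count identical
# messages -- useful for summarizing the repeated FlexVolume probe failures.
STAMP = re.compile(
    r"^\w{3} \d{2} [\d:.]+ (\w+)\[\d+\]: [IWE]\d{4} [\d:.]+\s+\d+\s+"
)

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = STAMP.match(line)
        if m:
            counts[line[m.end():].strip()] += 1
    return counts

# Example (path hypothetical):
# for msg, n in summarize(open("kubelet.journal")).most_common(3):
#     print(n, msg[:80])
```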
Feb 13 16:07:11.648877 kubelet[3274]: E0213 16:07:11.648773 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.648877 kubelet[3274]: W0213 16:07:11.648865 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.649498 kubelet[3274]: E0213 16:07:11.648965 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.650856 kubelet[3274]: E0213 16:07:11.650660 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.650856 kubelet[3274]: W0213 16:07:11.650847 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.651672 kubelet[3274]: E0213 16:07:11.651373 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.654062 kubelet[3274]: E0213 16:07:11.653573 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.654062 kubelet[3274]: W0213 16:07:11.653620 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.654062 kubelet[3274]: E0213 16:07:11.654079 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.655706 kubelet[3274]: E0213 16:07:11.655639 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.655855 kubelet[3274]: W0213 16:07:11.655798 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.655855 kubelet[3274]: E0213 16:07:11.655834 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.659674 kubelet[3274]: E0213 16:07:11.659583 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.659928 kubelet[3274]: W0213 16:07:11.659778 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.660014 kubelet[3274]: E0213 16:07:11.659974 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.673987 kubelet[3274]: E0213 16:07:11.673710 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.674825 kubelet[3274]: W0213 16:07:11.674160 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.674825 kubelet[3274]: E0213 16:07:11.674216 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.683664 kubelet[3274]: I0213 16:07:11.683592 3274 topology_manager.go:215] "Topology Admit Handler" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800" podNamespace="calico-system" podName="csi-node-driver-gphn8"
Feb 13 16:07:11.684099 kubelet[3274]: E0213 16:07:11.684022 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:11.719610 kubelet[3274]: E0213 16:07:11.719528 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.719610 kubelet[3274]: W0213 16:07:11.719588 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.719867 kubelet[3274]: E0213 16:07:11.719635 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.720183 kubelet[3274]: E0213 16:07:11.720143 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.720183 kubelet[3274]: W0213 16:07:11.720175 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.720324 kubelet[3274]: E0213 16:07:11.720201 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.722361 kubelet[3274]: E0213 16:07:11.722309 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.722361 kubelet[3274]: W0213 16:07:11.722348 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.722572 kubelet[3274]: E0213 16:07:11.722384 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.724067 kubelet[3274]: E0213 16:07:11.723979 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.724067 kubelet[3274]: W0213 16:07:11.724042 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.724560 kubelet[3274]: E0213 16:07:11.724094 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.725107 kubelet[3274]: E0213 16:07:11.725054 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.725107 kubelet[3274]: W0213 16:07:11.725095 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.725292 kubelet[3274]: E0213 16:07:11.725131 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.726891 kubelet[3274]: E0213 16:07:11.726645 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.726891 kubelet[3274]: W0213 16:07:11.726686 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.726891 kubelet[3274]: E0213 16:07:11.726723 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.727880 kubelet[3274]: E0213 16:07:11.727136 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.727880 kubelet[3274]: W0213 16:07:11.727162 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.727880 kubelet[3274]: E0213 16:07:11.727190 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.727880 kubelet[3274]: E0213 16:07:11.727684 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.727880 kubelet[3274]: W0213 16:07:11.727714 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.727880 kubelet[3274]: E0213 16:07:11.727745 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.729070 kubelet[3274]: E0213 16:07:11.728287 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.729070 kubelet[3274]: W0213 16:07:11.728341 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.729070 kubelet[3274]: E0213 16:07:11.728383 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.730031 kubelet[3274]: E0213 16:07:11.729973 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.730031 kubelet[3274]: W0213 16:07:11.730017 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.730280 kubelet[3274]: E0213 16:07:11.730062 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.730725 kubelet[3274]: E0213 16:07:11.730630 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.730725 kubelet[3274]: W0213 16:07:11.730692 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.731797 kubelet[3274]: E0213 16:07:11.730732 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.731797 kubelet[3274]: E0213 16:07:11.731595 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.731797 kubelet[3274]: W0213 16:07:11.731625 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.731797 kubelet[3274]: E0213 16:07:11.731658 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.733978 kubelet[3274]: E0213 16:07:11.733082 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.733978 kubelet[3274]: W0213 16:07:11.733137 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.733978 kubelet[3274]: E0213 16:07:11.733185 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.736229 kubelet[3274]: E0213 16:07:11.735968 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.736229 kubelet[3274]: W0213 16:07:11.736017 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.736229 kubelet[3274]: E0213 16:07:11.736051 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.739421 kubelet[3274]: E0213 16:07:11.738023 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.739421 kubelet[3274]: W0213 16:07:11.738072 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.739421 kubelet[3274]: E0213 16:07:11.738138 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.739421 kubelet[3274]: E0213 16:07:11.738677 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.739421 kubelet[3274]: W0213 16:07:11.738708 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.739811 kubelet[3274]: E0213 16:07:11.739352 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.740809 kubelet[3274]: E0213 16:07:11.740070 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.740809 kubelet[3274]: W0213 16:07:11.740110 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.740809 kubelet[3274]: E0213 16:07:11.740144 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.742210 kubelet[3274]: E0213 16:07:11.741682 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.742210 kubelet[3274]: W0213 16:07:11.741725 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.742210 kubelet[3274]: E0213 16:07:11.741760 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.742460 kubelet[3274]: E0213 16:07:11.742236 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.742460 kubelet[3274]: W0213 16:07:11.742259 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.742460 kubelet[3274]: E0213 16:07:11.742288 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.745089 kubelet[3274]: E0213 16:07:11.744785 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.745089 kubelet[3274]: W0213 16:07:11.745043 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.747168 kubelet[3274]: E0213 16:07:11.745336 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.753697 kubelet[3274]: E0213 16:07:11.753622 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.753697 kubelet[3274]: W0213 16:07:11.753668 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.753919 kubelet[3274]: E0213 16:07:11.753720 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.753919 kubelet[3274]: I0213 16:07:11.753769 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ecaa1f9-7c73-451d-b745-a3b442214800-registration-dir\") pod \"csi-node-driver-gphn8\" (UID: \"8ecaa1f9-7c73-451d-b745-a3b442214800\") " pod="calico-system/csi-node-driver-gphn8"
Feb 13 16:07:11.756020 kubelet[3274]: E0213 16:07:11.755615 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.756020 kubelet[3274]: W0213 16:07:11.755663 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.756020 kubelet[3274]: E0213 16:07:11.755701 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.761365 kubelet[3274]: E0213 16:07:11.761099 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.763332 kubelet[3274]: W0213 16:07:11.761615 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.763332 kubelet[3274]: E0213 16:07:11.761692 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.764075 kubelet[3274]: E0213 16:07:11.763749 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.764075 kubelet[3274]: W0213 16:07:11.763785 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.764075 kubelet[3274]: E0213 16:07:11.763819 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.764075 kubelet[3274]: I0213 16:07:11.763865 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ecaa1f9-7c73-451d-b745-a3b442214800-varrun\") pod \"csi-node-driver-gphn8\" (UID: \"8ecaa1f9-7c73-451d-b745-a3b442214800\") " pod="calico-system/csi-node-driver-gphn8"
Feb 13 16:07:11.765573 kubelet[3274]: E0213 16:07:11.764901 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.765573 kubelet[3274]: W0213 16:07:11.764942 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.765573 kubelet[3274]: E0213 16:07:11.764991 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.767849 kubelet[3274]: E0213 16:07:11.767813 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.768019 kubelet[3274]: W0213 16:07:11.767989 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.770921 kubelet[3274]: E0213 16:07:11.768881 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.770921 kubelet[3274]: I0213 16:07:11.770880 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ecaa1f9-7c73-451d-b745-a3b442214800-socket-dir\") pod \"csi-node-driver-gphn8\" (UID: \"8ecaa1f9-7c73-451d-b745-a3b442214800\") " pod="calico-system/csi-node-driver-gphn8"
Feb 13 16:07:11.771624 kubelet[3274]: E0213 16:07:11.770085 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.771624 kubelet[3274]: W0213 16:07:11.770965 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.772808 kubelet[3274]: E0213 16:07:11.772741 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.772808 kubelet[3274]: W0213 16:07:11.772781 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.774117 kubelet[3274]: E0213 16:07:11.773814 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.774117 kubelet[3274]: E0213 16:07:11.773887 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.774723 kubelet[3274]: E0213 16:07:11.774655 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.774723 kubelet[3274]: W0213 16:07:11.774702 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.775198 kubelet[3274]: E0213 16:07:11.774786 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.775198 kubelet[3274]: I0213 16:07:11.774850 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24tt\" (UniqueName: \"kubernetes.io/projected/8ecaa1f9-7c73-451d-b745-a3b442214800-kube-api-access-x24tt\") pod \"csi-node-driver-gphn8\" (UID: \"8ecaa1f9-7c73-451d-b745-a3b442214800\") " pod="calico-system/csi-node-driver-gphn8"
Feb 13 16:07:11.776185 kubelet[3274]: E0213 16:07:11.776031 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.776185 kubelet[3274]: W0213 16:07:11.776068 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.777520 kubelet[3274]: E0213 16:07:11.776784 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.777520 kubelet[3274]: W0213 16:07:11.776823 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.777520 kubelet[3274]: E0213 16:07:11.776862 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.777520 kubelet[3274]: E0213 16:07:11.776902 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.778604 kubelet[3274]: E0213 16:07:11.778277 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.778604 kubelet[3274]: W0213 16:07:11.778309 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.778604 kubelet[3274]: E0213 16:07:11.778351 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.780193 kubelet[3274]: E0213 16:07:11.779985 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.780193 kubelet[3274]: W0213 16:07:11.780021 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.780193 kubelet[3274]: E0213 16:07:11.780054 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.781096 kubelet[3274]: E0213 16:07:11.780834 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.781096 kubelet[3274]: W0213 16:07:11.780880 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.781096 kubelet[3274]: E0213 16:07:11.780912 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.782376 kubelet[3274]: E0213 16:07:11.781852 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.782376 kubelet[3274]: W0213 16:07:11.781893 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.782376 kubelet[3274]: E0213 16:07:11.781932 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.783819 kubelet[3274]: E0213 16:07:11.783435 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.783819 kubelet[3274]: W0213 16:07:11.783527 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.783819 kubelet[3274]: E0213 16:07:11.783577 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.784931 kubelet[3274]: E0213 16:07:11.784736 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.784931 kubelet[3274]: W0213 16:07:11.784770 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.784931 kubelet[3274]: E0213 16:07:11.784804 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.787508 kubelet[3274]: E0213 16:07:11.786273 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.787508 kubelet[3274]: W0213 16:07:11.786317 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.787508 kubelet[3274]: E0213 16:07:11.786356 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.787508 kubelet[3274]: I0213 16:07:11.786437 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ecaa1f9-7c73-451d-b745-a3b442214800-kubelet-dir\") pod \"csi-node-driver-gphn8\" (UID: \"8ecaa1f9-7c73-451d-b745-a3b442214800\") " pod="calico-system/csi-node-driver-gphn8"
Feb 13 16:07:11.788735 kubelet[3274]: E0213 16:07:11.788551 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.788735 kubelet[3274]: W0213 16:07:11.788595 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.788735 kubelet[3274]: E0213 16:07:11.788631 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.789585 kubelet[3274]: E0213 16:07:11.789425 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.789585 kubelet[3274]: W0213 16:07:11.789500 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.789585 kubelet[3274]: E0213 16:07:11.789534 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.888526 kubelet[3274]: E0213 16:07:11.888049 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.889035 kubelet[3274]: W0213 16:07:11.888114 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.889035 kubelet[3274]: E0213 16:07:11.888642 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.891602 kubelet[3274]: E0213 16:07:11.891492 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.892058 kubelet[3274]: W0213 16:07:11.891835 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.892058 kubelet[3274]: E0213 16:07:11.891963 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.893277 kubelet[3274]: E0213 16:07:11.892912 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.893277 kubelet[3274]: W0213 16:07:11.892961 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.893277 kubelet[3274]: E0213 16:07:11.893007 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.896172 kubelet[3274]: E0213 16:07:11.895659 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.896172 kubelet[3274]: W0213 16:07:11.895696 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.896172 kubelet[3274]: E0213 16:07:11.895744 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.897597 kubelet[3274]: E0213 16:07:11.897549 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.897597 kubelet[3274]: W0213 16:07:11.897588 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.897858 kubelet[3274]: E0213 16:07:11.897812 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.898161 kubelet[3274]: E0213 16:07:11.898120 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.898161 kubelet[3274]: W0213 16:07:11.898152 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.898920 kubelet[3274]: E0213 16:07:11.898511 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.898920 kubelet[3274]: E0213 16:07:11.898552 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.898920 kubelet[3274]: W0213 16:07:11.898574 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.898920 kubelet[3274]: E0213 16:07:11.898692 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.899834 kubelet[3274]: E0213 16:07:11.899280 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.899834 kubelet[3274]: W0213 16:07:11.899322 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.899834 kubelet[3274]: E0213 16:07:11.899409 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.899834 kubelet[3274]: E0213 16:07:11.899948 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.899834 kubelet[3274]: W0213 16:07:11.899972 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.899834 kubelet[3274]: E0213 16:07:11.900020 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.899834 kubelet[3274]: E0213 16:07:11.900409 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.900932 kubelet[3274]: W0213 16:07:11.900429 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.900932 kubelet[3274]: E0213 16:07:11.900659 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.902200 kubelet[3274]: E0213 16:07:11.900944 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.902200 kubelet[3274]: W0213 16:07:11.900972 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.902200 kubelet[3274]: E0213 16:07:11.901024 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.902200 kubelet[3274]: E0213 16:07:11.901897 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.902200 kubelet[3274]: W0213 16:07:11.901929 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.902200 kubelet[3274]: E0213 16:07:11.901982 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.903756 kubelet[3274]: E0213 16:07:11.902458 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.903756 kubelet[3274]: W0213 16:07:11.902582 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.903756 kubelet[3274]: E0213 16:07:11.902623 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.903756 kubelet[3274]: E0213 16:07:11.903270 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.903756 kubelet[3274]: W0213 16:07:11.903313 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.903756 kubelet[3274]: E0213 16:07:11.903358 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.904964 kubelet[3274]: E0213 16:07:11.904053 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.904964 kubelet[3274]: W0213 16:07:11.904079 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.904964 kubelet[3274]: E0213 16:07:11.904129 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.904964 kubelet[3274]: E0213 16:07:11.904609 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.904964 kubelet[3274]: W0213 16:07:11.904638 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.904964 kubelet[3274]: E0213 16:07:11.904679 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:11.906023 kubelet[3274]: E0213 16:07:11.905961 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:11.906023 kubelet[3274]: W0213 16:07:11.906018 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:11.906278 kubelet[3274]: E0213 16:07:11.906102 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 16:07:11.907056 kubelet[3274]: E0213 16:07:11.906690 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.907056 kubelet[3274]: W0213 16:07:11.906728 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.907056 kubelet[3274]: E0213 16:07:11.906793 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.908664 kubelet[3274]: E0213 16:07:11.907129 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.908664 kubelet[3274]: W0213 16:07:11.907150 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.908664 kubelet[3274]: E0213 16:07:11.907617 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.908664 kubelet[3274]: W0213 16:07:11.907650 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.908664 kubelet[3274]: E0213 16:07:11.907694 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.908664 kubelet[3274]: E0213 16:07:11.907788 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.908664 kubelet[3274]: E0213 16:07:11.908245 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.908664 kubelet[3274]: W0213 16:07:11.908273 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.908664 kubelet[3274]: E0213 16:07:11.908303 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.909112 kubelet[3274]: E0213 16:07:11.908719 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.909112 kubelet[3274]: W0213 16:07:11.908741 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.909112 kubelet[3274]: E0213 16:07:11.908766 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:07:11.909112 kubelet[3274]: E0213 16:07:11.909089 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.909112 kubelet[3274]: W0213 16:07:11.909105 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.909351 kubelet[3274]: E0213 16:07:11.909124 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.911073 kubelet[3274]: E0213 16:07:11.909951 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.911073 kubelet[3274]: W0213 16:07:11.910010 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.911073 kubelet[3274]: E0213 16:07:11.910052 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.911073 kubelet[3274]: E0213 16:07:11.910679 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.911073 kubelet[3274]: W0213 16:07:11.910709 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.911073 kubelet[3274]: E0213 16:07:11.910742 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.912308 kubelet[3274]: E0213 16:07:11.911184 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.912308 kubelet[3274]: W0213 16:07:11.911209 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.912308 kubelet[3274]: E0213 16:07:11.911435 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.912308 kubelet[3274]: E0213 16:07:11.911813 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.912308 kubelet[3274]: W0213 16:07:11.911845 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.912308 kubelet[3274]: E0213 16:07:11.911908 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:07:11.914125 kubelet[3274]: E0213 16:07:11.913708 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.914125 kubelet[3274]: W0213 16:07:11.913754 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.914125 kubelet[3274]: E0213 16:07:11.913789 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.915617 kubelet[3274]: E0213 16:07:11.914391 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.915617 kubelet[3274]: W0213 16:07:11.914426 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.915617 kubelet[3274]: E0213 16:07:11.914554 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:11.915617 kubelet[3274]: E0213 16:07:11.915145 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:11.915617 kubelet[3274]: W0213 16:07:11.915173 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:11.915617 kubelet[3274]: E0213 16:07:11.915209 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:12.013125 kubelet[3274]: E0213 16:07:12.013065 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:12.013125 kubelet[3274]: W0213 16:07:12.013110 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:12.013355 kubelet[3274]: E0213 16:07:12.013147 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:07:12.015532 kubelet[3274]: E0213 16:07:12.015460 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:07:12.015532 kubelet[3274]: W0213 16:07:12.015521 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:07:12.015532 kubelet[3274]: E0213 16:07:12.015557 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
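The repeating driver-call.go / plugins.go trio is the kubelet's FlexVolume prober exec'ing the uds driver with the init argument and expecting a JSON status on stdout; the binary is absent, so the call produces no output and decoding the empty string fails. A minimal Go sketch of that failure mode, assuming nothing beyond what the log shows (DriverStatus is an illustrative shape, not the kubelet's actual type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus approximates the JSON a FlexVolume driver is expected to
// print, e.g. {"status":"Success"}; the field set here is illustrative.
type DriverStatus struct {
	Status string `json:"status"`
}

func main() {
	// Driver path taken from the log lines above.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// With the binary missing, the exec fails and out stays empty,
	// matching the W-level "driver call failed" records.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// json.Unmarshal on zero bytes returns exactly
	// "unexpected end of JSON input", matching the E-level records.
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}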
Feb 13 16:07:12.234099 kubelet[3274]: E0213 16:07:12.233866 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:12.234099 kubelet[3274]: W0213 16:07:12.233913 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:12.234099 kubelet[3274]: E0213 16:07:12.233958 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:12.435303 kubelet[3274]: E0213 16:07:12.435096 3274 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:07:12.435303 kubelet[3274]: E0213 16:07:12.435263 3274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31c540e0-bcf4-4991-8cc6-cd2997604431-tigera-ca-bundle podName:31c540e0-bcf4-4991-8cc6-cd2997604431 nodeName:}" failed. No retries permitted until 2025-02-13 16:07:12.93522042 +0000 UTC m=+26.867790812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/31c540e0-bcf4-4991-8cc6-cd2997604431-tigera-ca-bundle") pod "calico-typha-58bc8f4d67-5v7cc" (UID: "31c540e0-bcf4-4991-8cc6-cd2997604431") : failed to sync configmap cache: timed out waiting for the condition
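The nestedpendingoperations record encodes the kubelet's retry pacing: a failed MountVolume.SetUp sets a no-retries-until deadline 500ms out, and the delay grows on consecutive failures of the same operation. A rough Go sketch of that pacing; the 500ms initial delay matches the log, while the doubling factor and the two-minute cap are assumptions for illustration, not values taken from this log:

package main

import (
	"fmt"
	"time"
)

// nextRetry returns the "no retries permitted until" deadline after the
// n-th consecutive failure of a volume operation. Illustrative only.
func nextRetry(failedAt time.Time, consecutiveFailures int) time.Time {
	delay := 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	for i := 1; i < consecutiveFailures; i++ {
		delay *= 2 // assumed exponential growth
		if delay > 2*time.Minute {
			delay = 2 * time.Minute // assumed ceiling
			break
		}
	}
	return failedAt.Add(delay)
}

func main() {
	failedAt := time.Date(2025, time.February, 13, 16, 7, 12, 435263000, time.UTC)
	for n := 1; n <= 5; n++ {
		fmt.Printf("failure %d -> no retries until %s\n", n, nextRetry(failedAt, n).Format(time.RFC3339Nano))
	}
}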
Feb 13 16:07:12.437103 kubelet[3274]: E0213 16:07:12.435110 3274 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Feb 13 16:07:12.437103 kubelet[3274]: E0213 16:07:12.435815 3274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31c540e0-bcf4-4991-8cc6-cd2997604431-typha-certs podName:31c540e0-bcf4-4991-8cc6-cd2997604431 nodeName:}" failed. No retries permitted until 2025-02-13 16:07:12.935786664 +0000 UTC m=+26.868357056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/31c540e0-bcf4-4991-8cc6-cd2997604431-typha-certs") pod "calico-typha-58bc8f4d67-5v7cc" (UID: "31c540e0-bcf4-4991-8cc6-cd2997604431") : failed to sync secret cache: timed out waiting for the condition
Feb 13 16:07:12.439581 kubelet[3274]: E0213 16:07:12.439220 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:12.439581 kubelet[3274]: W0213 16:07:12.439257 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:12.439581 kubelet[3274]: E0213 16:07:12.439287 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:12.545889 kubelet[3274]: E0213 16:07:12.545740 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:12.545889 kubelet[3274]: W0213 16:07:12.545774 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:12.545889 kubelet[3274]: E0213 16:07:12.545806 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:12.638440 kubelet[3274]: E0213 16:07:12.638018 3274 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:07:12.638440 kubelet[3274]: E0213 16:07:12.638124 3274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ffd808bf-1f3a-457d-a86e-3118bee8ab30-tigera-ca-bundle podName:ffd808bf-1f3a-457d-a86e-3118bee8ab30 nodeName:}" failed. No retries permitted until 2025-02-13 16:07:13.138098541 +0000 UTC m=+27.070668933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/ffd808bf-1f3a-457d-a86e-3118bee8ab30-tigera-ca-bundle") pod "calico-node-mlcn4" (UID: "ffd808bf-1f3a-457d-a86e-3118bee8ab30") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 16:07:12.647325 kubelet[3274]: E0213 16:07:12.647282 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:12.647325 kubelet[3274]: W0213 16:07:12.647319 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:12.647618 kubelet[3274]: E0213 16:07:12.647351 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:13.062786 kubelet[3274]: E0213 16:07:13.062705 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:13.062786 kubelet[3274]: W0213 16:07:13.062763 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:13.063097 kubelet[3274]: E0213 16:07:13.062815 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:13.137713 containerd[1932]: time="2025-02-13T16:07:13.137512539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58bc8f4d67-5v7cc,Uid:31c540e0-bcf4-4991-8cc6-cd2997604431,Namespace:calico-system,Attempt:0,}"
Feb 13 16:07:13.172667 kubelet[3274]: E0213 16:07:13.169897 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:13.172667 kubelet[3274]: W0213 16:07:13.170215 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:13.172667 kubelet[3274]: E0213 16:07:13.170505 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:13.184646 kubelet[3274]: E0213 16:07:13.183395 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:13.184646 kubelet[3274]: W0213 16:07:13.183436 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:13.184646 kubelet[3274]: E0213 16:07:13.183537 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:13.211113 containerd[1932]: time="2025-02-13T16:07:13.210382060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:13.213240 containerd[1932]: time="2025-02-13T16:07:13.212868328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:13.213240 containerd[1932]: time="2025-02-13T16:07:13.212933188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:13.213240 containerd[1932]: time="2025-02-13T16:07:13.213159424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:13.269373 systemd[1]: Started cri-containerd-05d47497a4a19ca943d905a067583765c2735f36caef778be2f8498ba5bffb6e.scope - libcontainer container 05d47497a4a19ca943d905a067583765c2735f36caef778be2f8498ba5bffb6e.
Feb 13 16:07:13.342103 containerd[1932]: time="2025-02-13T16:07:13.341097100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mlcn4,Uid:ffd808bf-1f3a-457d-a86e-3118bee8ab30,Namespace:calico-system,Attempt:0,}"
Feb 13 16:07:13.354169 containerd[1932]: time="2025-02-13T16:07:13.354093868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58bc8f4d67-5v7cc,Uid:31c540e0-bcf4-4991-8cc6-cd2997604431,Namespace:calico-system,Attempt:0,} returns sandbox id \"05d47497a4a19ca943d905a067583765c2735f36caef778be2f8498ba5bffb6e\""
Feb 13 16:07:13.359115 containerd[1932]: time="2025-02-13T16:07:13.358872016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 16:07:13.418003 kubelet[3274]: E0213 16:07:13.417143 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:13.421916 containerd[1932]: time="2025-02-13T16:07:13.421674941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:13.423507 containerd[1932]: time="2025-02-13T16:07:13.421830317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:13.426935 containerd[1932]: time="2025-02-13T16:07:13.426572237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:13.426935 containerd[1932]: time="2025-02-13T16:07:13.426852533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:13.478108 systemd[1]: Started cri-containerd-45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520.scope - libcontainer container 45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520.
Feb 13 16:07:13.529030 containerd[1932]: time="2025-02-13T16:07:13.528962009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mlcn4,Uid:ffd808bf-1f3a-457d-a86e-3118bee8ab30,Namespace:calico-system,Attempt:0,} returns sandbox id \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\""
Feb 13 16:07:14.973532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646924112.mount: Deactivated successfully.
Feb 13 16:07:15.417851 kubelet[3274]: E0213 16:07:15.417406 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:15.631423 containerd[1932]: time="2025-02-13T16:07:15.631204100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:15.634067 containerd[1932]: time="2025-02-13T16:07:15.633877580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Feb 13 16:07:15.637013 containerd[1932]: time="2025-02-13T16:07:15.636875876Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:15.645660 containerd[1932]: time="2025-02-13T16:07:15.645586964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:15.649584 containerd[1932]: time="2025-02-13T16:07:15.647867516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.288107092s"
Feb 13 16:07:15.649584 containerd[1932]: time="2025-02-13T16:07:15.647945600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Feb 13 16:07:15.652857 containerd[1932]: time="2025-02-13T16:07:15.652510244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 16:07:15.695570 containerd[1932]: time="2025-02-13T16:07:15.695345060Z" level=info msg="CreateContainer within sandbox \"05d47497a4a19ca943d905a067583765c2735f36caef778be2f8498ba5bffb6e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 16:07:15.744772 containerd[1932]: time="2025-02-13T16:07:15.744574340Z" level=info msg="CreateContainer within sandbox \"05d47497a4a19ca943d905a067583765c2735f36caef778be2f8498ba5bffb6e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"89679837edd38e6d13d00f69a02218fc8eca9c5ee070ae5c947cfbcb96e91e92\""
Feb 13 16:07:15.746599 containerd[1932]: time="2025-02-13T16:07:15.746522204Z" level=info msg="StartContainer for \"89679837edd38e6d13d00f69a02218fc8eca9c5ee070ae5c947cfbcb96e91e92\""
Feb 13 16:07:15.816832 systemd[1]: Started cri-containerd-89679837edd38e6d13d00f69a02218fc8eca9c5ee070ae5c947cfbcb96e91e92.scope - libcontainer container 89679837edd38e6d13d00f69a02218fc8eca9c5ee070ae5c947cfbcb96e91e92.
Feb 13 16:07:15.894261 containerd[1932]: time="2025-02-13T16:07:15.894181017Z" level=info msg="StartContainer for \"89679837edd38e6d13d00f69a02218fc8eca9c5ee070ae5c947cfbcb96e91e92\" returns successfully"
Feb 13 16:07:16.633619 kubelet[3274]: I0213 16:07:16.633494 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58bc8f4d67-5v7cc" podStartSLOduration=3.339643253 podStartE2EDuration="5.633184137s" podCreationTimestamp="2025-02-13 16:07:11 +0000 UTC" firstStartedPulling="2025-02-13 16:07:13.357100756 +0000 UTC m=+27.289671148" lastFinishedPulling="2025-02-13 16:07:15.65064164 +0000 UTC m=+29.583212032" observedRunningTime="2025-02-13 16:07:16.632240157 +0000 UTC m=+30.564810645" watchObservedRunningTime="2025-02-13 16:07:16.633184137 +0000 UTC m=+30.565754529"
Feb 13 16:07:16.693376 kubelet[3274]: E0213 16:07:16.693323 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.693376 kubelet[3274]: W0213 16:07:16.693363 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.693769 kubelet[3274]: E0213 16:07:16.693397 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.693865 kubelet[3274]: E0213 16:07:16.693784 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.693865 kubelet[3274]: W0213 16:07:16.693803 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.693865 kubelet[3274]: E0213 16:07:16.693826 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.694155 kubelet[3274]: E0213 16:07:16.694108 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.694155 kubelet[3274]: W0213 16:07:16.694125 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.694155 kubelet[3274]: E0213 16:07:16.694144 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.694490 kubelet[3274]: E0213 16:07:16.694434 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.694490 kubelet[3274]: W0213 16:07:16.694451 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.694723 kubelet[3274]: E0213 16:07:16.694487 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.694916 kubelet[3274]: E0213 16:07:16.694787 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.694916 kubelet[3274]: W0213 16:07:16.694803 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.694916 kubelet[3274]: E0213 16:07:16.694823 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.695134 kubelet[3274]: E0213 16:07:16.695102 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.695211 kubelet[3274]: W0213 16:07:16.695131 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.695211 kubelet[3274]: E0213 16:07:16.695154 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.695504 kubelet[3274]: E0213 16:07:16.695450 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.695504 kubelet[3274]: W0213 16:07:16.695496 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.695720 kubelet[3274]: E0213 16:07:16.695519 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.695847 kubelet[3274]: E0213 16:07:16.695822 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.695847 kubelet[3274]: W0213 16:07:16.695839 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.696143 kubelet[3274]: E0213 16:07:16.695859 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.696250 kubelet[3274]: E0213 16:07:16.696156 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.696250 kubelet[3274]: W0213 16:07:16.696172 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.696250 kubelet[3274]: E0213 16:07:16.696194 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.696645 kubelet[3274]: E0213 16:07:16.696495 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.696645 kubelet[3274]: W0213 16:07:16.696512 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.696645 kubelet[3274]: E0213 16:07:16.696531 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.696832 kubelet[3274]: E0213 16:07:16.696783 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.696832 kubelet[3274]: W0213 16:07:16.696800 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.696832 kubelet[3274]: E0213 16:07:16.696818 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.697112 kubelet[3274]: E0213 16:07:16.697085 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.697112 kubelet[3274]: W0213 16:07:16.697110 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.697112 kubelet[3274]: E0213 16:07:16.697132 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.697503 kubelet[3274]: E0213 16:07:16.697422 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.697503 kubelet[3274]: W0213 16:07:16.697459 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.697503 kubelet[3274]: E0213 16:07:16.697506 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.697823 kubelet[3274]: E0213 16:07:16.697794 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.697922 kubelet[3274]: W0213 16:07:16.697819 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.697922 kubelet[3274]: E0213 16:07:16.697841 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.698139 kubelet[3274]: E0213 16:07:16.698121 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.698139 kubelet[3274]: W0213 16:07:16.698136 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.698303 kubelet[3274]: E0213 16:07:16.698156 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.708756 kubelet[3274]: E0213 16:07:16.708652 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.708756 kubelet[3274]: W0213 16:07:16.708684 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.708756 kubelet[3274]: E0213 16:07:16.708728 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.709293 kubelet[3274]: E0213 16:07:16.709255 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.709398 kubelet[3274]: W0213 16:07:16.709288 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.709398 kubelet[3274]: E0213 16:07:16.709359 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.709954 kubelet[3274]: E0213 16:07:16.709925 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.710148 kubelet[3274]: W0213 16:07:16.710078 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.710340 kubelet[3274]: E0213 16:07:16.710129 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.710602 kubelet[3274]: E0213 16:07:16.710572 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.710768 kubelet[3274]: W0213 16:07:16.710602 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.710768 kubelet[3274]: E0213 16:07:16.710642 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.711045 kubelet[3274]: E0213 16:07:16.711018 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.711203 kubelet[3274]: W0213 16:07:16.711044 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.711203 kubelet[3274]: E0213 16:07:16.711119 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.711432 kubelet[3274]: E0213 16:07:16.711386 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.711432 kubelet[3274]: W0213 16:07:16.711411 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.711432 kubelet[3274]: E0213 16:07:16.711494 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.711887 kubelet[3274]: E0213 16:07:16.711724 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.711887 kubelet[3274]: W0213 16:07:16.711740 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.711887 kubelet[3274]: E0213 16:07:16.711787 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.712237 kubelet[3274]: E0213 16:07:16.712010 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.712237 kubelet[3274]: W0213 16:07:16.712025 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.712237 kubelet[3274]: E0213 16:07:16.712056 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.712439 kubelet[3274]: E0213 16:07:16.712406 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.712629 kubelet[3274]: W0213 16:07:16.712423 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.712629 kubelet[3274]: E0213 16:07:16.712517 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.713010 kubelet[3274]: E0213 16:07:16.712955 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.713608 kubelet[3274]: W0213 16:07:16.713010 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.713608 kubelet[3274]: E0213 16:07:16.713102 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.713608 kubelet[3274]: E0213 16:07:16.713459 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.713608 kubelet[3274]: W0213 16:07:16.713519 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.713864 kubelet[3274]: E0213 16:07:16.713812 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.713864 kubelet[3274]: W0213 16:07:16.713828 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.714063 kubelet[3274]: E0213 16:07:16.714008 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.714190 kubelet[3274]: E0213 16:07:16.714072 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.714389 kubelet[3274]: E0213 16:07:16.714084 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.714684 kubelet[3274]: W0213 16:07:16.714519 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.714919 kubelet[3274]: E0213 16:07:16.714564 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.715349 kubelet[3274]: E0213 16:07:16.715104 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.715349 kubelet[3274]: W0213 16:07:16.715139 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.715349 kubelet[3274]: E0213 16:07:16.715176 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.715988 kubelet[3274]: E0213 16:07:16.715904 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.715988 kubelet[3274]: W0213 16:07:16.715933 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.716337 kubelet[3274]: E0213 16:07:16.715962 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.717159 kubelet[3274]: E0213 16:07:16.717004 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.717159 kubelet[3274]: W0213 16:07:16.717028 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.717159 kubelet[3274]: E0213 16:07:16.717054 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.717732 kubelet[3274]: E0213 16:07:16.717610 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.717732 kubelet[3274]: W0213 16:07:16.717634 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.717732 kubelet[3274]: E0213 16:07:16.717658 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:16.719061 kubelet[3274]: E0213 16:07:16.718939 3274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:07:16.719061 kubelet[3274]: W0213 16:07:16.718972 3274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:07:16.719061 kubelet[3274]: E0213 16:07:16.719002 3274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:07:17.280520 containerd[1932]: time="2025-02-13T16:07:17.279678848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:17.282525 containerd[1932]: time="2025-02-13T16:07:17.282353516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Feb 13 16:07:17.285042 containerd[1932]: time="2025-02-13T16:07:17.284930972Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:17.290798 containerd[1932]: time="2025-02-13T16:07:17.290628392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:17.293584 containerd[1932]: time="2025-02-13T16:07:17.292573028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.639969592s"
Feb 13 16:07:17.293584 containerd[1932]: time="2025-02-13T16:07:17.292679960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 16:07:17.300541 containerd[1932]: time="2025-02-13T16:07:17.300399344Z" level=info msg="CreateContainer within sandbox \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 16:07:17.344716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032200918.mount: Deactivated successfully.
Feb 13 16:07:17.347764 containerd[1932]: time="2025-02-13T16:07:17.347554232Z" level=info msg="CreateContainer within sandbox \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b\""
Feb 13 16:07:17.349746 containerd[1932]: time="2025-02-13T16:07:17.349238492Z" level=info msg="StartContainer for \"50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b\""
Feb 13 16:07:17.417815 kubelet[3274]: E0213 16:07:17.417151 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:17.424279 systemd[1]: Started cri-containerd-50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b.scope - libcontainer container 50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b.
Feb 13 16:07:17.491751 containerd[1932]: time="2025-02-13T16:07:17.491643093Z" level=info msg="StartContainer for \"50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b\" returns successfully"
Feb 13 16:07:17.533458 systemd[1]: cri-containerd-50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b.scope: Deactivated successfully.
Feb 13 16:07:17.618220 kubelet[3274]: I0213 16:07:17.616545 3274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 16:07:17.668649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b-rootfs.mount: Deactivated successfully.
Feb 13 16:07:17.774642 containerd[1932]: time="2025-02-13T16:07:17.774518194Z" level=info msg="shim disconnected" id=50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b namespace=k8s.io
Feb 13 16:07:17.774642 containerd[1932]: time="2025-02-13T16:07:17.774613450Z" level=warning msg="cleaning up after shim disconnected" id=50452ec3459f54fe6d7b8776cbc31cf674a0055a2344914058f41ea54e8f735b namespace=k8s.io
Feb 13 16:07:17.774642 containerd[1932]: time="2025-02-13T16:07:17.774635674Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:18.628119 containerd[1932]: time="2025-02-13T16:07:18.627868823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 16:07:19.416892 kubelet[3274]: E0213 16:07:19.416801 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:21.417166 kubelet[3274]: E0213 16:07:21.417092 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:22.464525 containerd[1932]: time="2025-02-13T16:07:22.464359946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:22.466015 containerd[1932]: time="2025-02-13T16:07:22.465947954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 16:07:22.467256 containerd[1932]: time="2025-02-13T16:07:22.467161142Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:22.471671 containerd[1932]: time="2025-02-13T16:07:22.471559934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:22.473520 containerd[1932]: time="2025-02-13T16:07:22.473201318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.845234587s"
Feb 13 16:07:22.473520 containerd[1932]: time="2025-02-13T16:07:22.473266238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 16:07:22.484791 containerd[1932]: time="2025-02-13T16:07:22.484229702Z" level=info msg="CreateContainer within sandbox \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 16:07:22.510939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070826142.mount: Deactivated successfully.
Feb 13 16:07:22.519213 containerd[1932]: time="2025-02-13T16:07:22.519089090Z" level=info msg="CreateContainer within sandbox \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d\""
Feb 13 16:07:22.522335 containerd[1932]: time="2025-02-13T16:07:22.520208366Z" level=info msg="StartContainer for \"e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d\""
Feb 13 16:07:22.580392 systemd[1]: run-containerd-runc-k8s.io-e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d-runc.Gl34Pq.mount: Deactivated successfully.
Feb 13 16:07:22.596872 systemd[1]: Started cri-containerd-e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d.scope - libcontainer container e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d.
Feb 13 16:07:22.661257 containerd[1932]: time="2025-02-13T16:07:22.661160631Z" level=info msg="StartContainer for \"e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d\" returns successfully"
Feb 13 16:07:23.340708 kubelet[3274]: I0213 16:07:23.340641 3274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 16:07:23.417687 kubelet[3274]: E0213 16:07:23.416951 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800"
Feb 13 16:07:23.822170 containerd[1932]: time="2025-02-13T16:07:23.822088924Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 16:07:23.827426 systemd[1]: cri-containerd-e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d.scope: Deactivated successfully.
Feb 13 16:07:23.871351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d-rootfs.mount: Deactivated successfully.
Feb 13 16:07:23.896524 kubelet[3274]: I0213 16:07:23.894431 3274 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 16:07:23.954347 kubelet[3274]: I0213 16:07:23.952250 3274 topology_manager.go:215] "Topology Admit Handler" podUID="01de8ba8-2672-4a12-9409-8ec82372a335" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dg565"
Feb 13 16:07:23.963374 kubelet[3274]: I0213 16:07:23.963307 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnsbh\" (UniqueName: \"kubernetes.io/projected/01de8ba8-2672-4a12-9409-8ec82372a335-kube-api-access-xnsbh\") pod \"coredns-7db6d8ff4d-dg565\" (UID: \"01de8ba8-2672-4a12-9409-8ec82372a335\") " pod="kube-system/coredns-7db6d8ff4d-dg565"
Feb 13 16:07:23.963595 kubelet[3274]: I0213 16:07:23.963395 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01de8ba8-2672-4a12-9409-8ec82372a335-config-volume\") pod \"coredns-7db6d8ff4d-dg565\" (UID: \"01de8ba8-2672-4a12-9409-8ec82372a335\") " pod="kube-system/coredns-7db6d8ff4d-dg565"
Feb 13 16:07:23.972507 kubelet[3274]: I0213 16:07:23.971262 3274 topology_manager.go:215] "Topology Admit Handler" podUID="adce654e-c14f-4b9f-9441-d91c1f7ad783" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rc9tv"
Feb 13 16:07:23.972690 kubelet[3274]: W0213 16:07:23.972621 3274 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:23.972690 kubelet[3274]: E0213 16:07:23.972671 3274 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-147" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-147' and this object
Feb 13 16:07:23.987749 systemd[1]: Created slice kubepods-burstable-pod01de8ba8_2672_4a12_9409_8ec82372a335.slice - libcontainer container kubepods-burstable-pod01de8ba8_2672_4a12_9409_8ec82372a335.slice.
Feb 13 16:07:24.006219 kubelet[3274]: I0213 16:07:24.003534 3274 topology_manager.go:215] "Topology Admit Handler" podUID="e16e8353-7776-4b1a-a927-2ad9f0aeb669" podNamespace="calico-apiserver" podName="calico-apiserver-6c6548768f-2jq2g"
Feb 13 16:07:24.008120 kubelet[3274]: I0213 16:07:24.006662 3274 topology_manager.go:215] "Topology Admit Handler" podUID="7ab6612a-9cb7-4dd0-8057-abec92b485a5" podNamespace="calico-system" podName="calico-kube-controllers-59f7b46b7f-wxpjk"
Feb 13 16:07:24.009664 kubelet[3274]: I0213 16:07:24.009614 3274 topology_manager.go:215] "Topology Admit Handler" podUID="3ffc1870-253d-4caf-9e31-d298393f248f" podNamespace="calico-apiserver" podName="calico-apiserver-6c6548768f-mmqrp"
Feb 13 16:07:24.011057 systemd[1]: Created slice kubepods-burstable-podadce654e_c14f_4b9f_9441_d91c1f7ad783.slice - libcontainer container kubepods-burstable-podadce654e_c14f_4b9f_9441_d91c1f7ad783.slice.
Feb 13 16:07:24.035560 systemd[1]: Created slice kubepods-besteffort-pode16e8353_7776_4b1a_a927_2ad9f0aeb669.slice - libcontainer container kubepods-besteffort-pode16e8353_7776_4b1a_a927_2ad9f0aeb669.slice.
Feb 13 16:07:24.066999 kubelet[3274]: I0213 16:07:24.063770 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e16e8353-7776-4b1a-a927-2ad9f0aeb669-calico-apiserver-certs\") pod \"calico-apiserver-6c6548768f-2jq2g\" (UID: \"e16e8353-7776-4b1a-a927-2ad9f0aeb669\") " pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g"
Feb 13 16:07:24.066999 kubelet[3274]: I0213 16:07:24.063873 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjtc\" (UniqueName: \"kubernetes.io/projected/3ffc1870-253d-4caf-9e31-d298393f248f-kube-api-access-zwjtc\") pod \"calico-apiserver-6c6548768f-mmqrp\" (UID: \"3ffc1870-253d-4caf-9e31-d298393f248f\") " pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp"
Feb 13 16:07:24.066999 kubelet[3274]: I0213 16:07:24.063920 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adce654e-c14f-4b9f-9441-d91c1f7ad783-config-volume\") pod \"coredns-7db6d8ff4d-rc9tv\" (UID: \"adce654e-c14f-4b9f-9441-d91c1f7ad783\") " pod="kube-system/coredns-7db6d8ff4d-rc9tv"
Feb 13 16:07:24.066999 kubelet[3274]: I0213 16:07:24.063978 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c97f\" (UniqueName: \"kubernetes.io/projected/adce654e-c14f-4b9f-9441-d91c1f7ad783-kube-api-access-2c97f\") pod \"coredns-7db6d8ff4d-rc9tv\" (UID: \"adce654e-c14f-4b9f-9441-d91c1f7ad783\") " pod="kube-system/coredns-7db6d8ff4d-rc9tv"
Feb 13 16:07:24.066999 kubelet[3274]: I0213 16:07:24.064047 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57cd\" (UniqueName: \"kubernetes.io/projected/e16e8353-7776-4b1a-a927-2ad9f0aeb669-kube-api-access-m57cd\") pod \"calico-apiserver-6c6548768f-2jq2g\" (UID: \"e16e8353-7776-4b1a-a927-2ad9f0aeb669\") " pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g"
Feb 13 16:07:24.067888 kubelet[3274]: I0213 16:07:24.064093 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ffc1870-253d-4caf-9e31-d298393f248f-calico-apiserver-certs\") pod \"calico-apiserver-6c6548768f-mmqrp\" (UID: \"3ffc1870-253d-4caf-9e31-d298393f248f\") " pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp"
Feb 13 16:07:24.067888 kubelet[3274]: I0213 16:07:24.064151 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab6612a-9cb7-4dd0-8057-abec92b485a5-tigera-ca-bundle\") pod \"calico-kube-controllers-59f7b46b7f-wxpjk\" (UID: \"7ab6612a-9cb7-4dd0-8057-abec92b485a5\") " pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk"
Feb 13 16:07:24.067888 kubelet[3274]: I0213 16:07:24.064201 3274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w7ks\" (UniqueName: \"kubernetes.io/projected/7ab6612a-9cb7-4dd0-8057-abec92b485a5-kube-api-access-5w7ks\") pod \"calico-kube-controllers-59f7b46b7f-wxpjk\" (UID: \"7ab6612a-9cb7-4dd0-8057-abec92b485a5\") " pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk"
Feb 13 16:07:24.068247 systemd[1]: Created slice kubepods-besteffort-pod3ffc1870_253d_4caf_9e31_d298393f248f.slice - libcontainer container kubepods-besteffort-pod3ffc1870_253d_4caf_9e31_d298393f248f.slice.
Feb 13 16:07:24.088918 systemd[1]: Created slice kubepods-besteffort-pod7ab6612a_9cb7_4dd0_8057_abec92b485a5.slice - libcontainer container kubepods-besteffort-pod7ab6612a_9cb7_4dd0_8057_abec92b485a5.slice.
Feb 13 16:07:24.346167 containerd[1932]: time="2025-02-13T16:07:24.345766815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-2jq2g,Uid:e16e8353-7776-4b1a-a927-2ad9f0aeb669,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 16:07:24.383749 containerd[1932]: time="2025-02-13T16:07:24.383656971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-mmqrp,Uid:3ffc1870-253d-4caf-9e31-d298393f248f,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 16:07:24.401788 containerd[1932]: time="2025-02-13T16:07:24.401679255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f7b46b7f-wxpjk,Uid:7ab6612a-9cb7-4dd0-8057-abec92b485a5,Namespace:calico-system,Attempt:0,}"
Feb 13 16:07:24.521899 containerd[1932]: time="2025-02-13T16:07:24.521749276Z" level=info msg="shim disconnected" id=e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d namespace=k8s.io
Feb 13 16:07:24.522521 containerd[1932]: time="2025-02-13T16:07:24.521833972Z" level=warning msg="cleaning up after shim disconnected" id=e97a3034536ea56520adaaf8315db795569d416a9ad7eaba958cb455c9af481d namespace=k8s.io
Feb 13 16:07:24.522521 containerd[1932]: time="2025-02-13T16:07:24.522358924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:24.931240 containerd[1932]: time="2025-02-13T16:07:24.930985194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc9tv,Uid:adce654e-c14f-4b9f-9441-d91c1f7ad783,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:24.945347 containerd[1932]: time="2025-02-13T16:07:24.944356254Z" level=error msg="Failed to destroy network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:24.954571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5-shm.mount: Deactivated successfully.
Feb 13 16:07:25.002864 containerd[1932]: time="2025-02-13T16:07:25.001539254Z" level=error msg="Failed to destroy network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.008332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161-shm.mount: Deactivated successfully.
Feb 13 16:07:25.014957 containerd[1932]: time="2025-02-13T16:07:25.013146290Z" level=error msg="encountered an error cleaning up failed sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.014957 containerd[1932]: time="2025-02-13T16:07:25.013265294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f7b46b7f-wxpjk,Uid:7ab6612a-9cb7-4dd0-8057-abec92b485a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.015230 kubelet[3274]: E0213 16:07:25.014066 3274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.015230 kubelet[3274]: E0213 16:07:25.014159 3274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk"
Feb 13 16:07:25.015230 kubelet[3274]: E0213 16:07:25.014198 3274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk"
Feb 13 16:07:25.017207 containerd[1932]: time="2025-02-13T16:07:25.014863358Z" level=error msg="Failed to destroy network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.017290 kubelet[3274]: E0213 16:07:25.014265 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59f7b46b7f-wxpjk_calico-system(7ab6612a-9cb7-4dd0-8057-abec92b485a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59f7b46b7f-wxpjk_calico-system(7ab6612a-9cb7-4dd0-8057-abec92b485a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk" podUID="7ab6612a-9cb7-4dd0-8057-abec92b485a5"
Feb 13 16:07:25.024580 containerd[1932]: time="2025-02-13T16:07:25.024161450Z" level=error msg="encountered an error cleaning up failed sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.024580 containerd[1932]: time="2025-02-13T16:07:25.024370094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-2jq2g,Uid:e16e8353-7776-4b1a-a927-2ad9f0aeb669,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.025294 kubelet[3274]: E0213 16:07:25.025008 3274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.025294 kubelet[3274]: E0213 16:07:25.025088 3274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g"
Feb 13 16:07:25.025294 kubelet[3274]: E0213 16:07:25.025125 3274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g"
Feb 13 16:07:25.025618 kubelet[3274]: E0213 16:07:25.025204 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6548768f-2jq2g_calico-apiserver(e16e8353-7776-4b1a-a927-2ad9f0aeb669)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6548768f-2jq2g_calico-apiserver(e16e8353-7776-4b1a-a927-2ad9f0aeb669)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g" podUID="e16e8353-7776-4b1a-a927-2ad9f0aeb669"
Feb 13 16:07:25.031288 containerd[1932]: time="2025-02-13T16:07:25.031189442Z" level=error msg="encountered an error cleaning up failed sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.032052 containerd[1932]: time="2025-02-13T16:07:25.031298906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-mmqrp,Uid:3ffc1870-253d-4caf-9e31-d298393f248f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.032204 kubelet[3274]: E0213 16:07:25.031629 3274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:07:25.032204 kubelet[3274]: E0213 16:07:25.031709 3274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp"
Feb 13 16:07:25.034113 kubelet[3274]: E0213 16:07:25.033940 3274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp"
Feb 13 16:07:25.034113 kubelet[3274]: E0213 16:07:25.034038 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6548768f-mmqrp_calico-apiserver(3ffc1870-253d-4caf-9e31-d298393f248f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6548768f-mmqrp_calico-apiserver(3ffc1870-253d-4caf-9e31-d298393f248f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp" podUID="3ffc1870-253d-4caf-9e31-d298393f248f"
Feb 13 16:07:25.126249 systemd[1]: Started sshd@7-172.31.18.147:22-139.178.68.195:33686.service - OpenSSH per-connection server daemon (139.178.68.195:33686).
Feb 13 16:07:25.144179 containerd[1932]: time="2025-02-13T16:07:25.144109239Z" level=error msg="Failed to destroy network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.144977 containerd[1932]: time="2025-02-13T16:07:25.144916167Z" level=error msg="encountered an error cleaning up failed sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.145234 containerd[1932]: time="2025-02-13T16:07:25.145189155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc9tv,Uid:adce654e-c14f-4b9f-9441-d91c1f7ad783,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.145788 kubelet[3274]: E0213 16:07:25.145701 3274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.146034 kubelet[3274]: E0213 16:07:25.145791 3274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rc9tv" Feb 13 16:07:25.146034 kubelet[3274]: E0213 16:07:25.145836 3274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rc9tv" Feb 13 16:07:25.146034 kubelet[3274]: E0213 16:07:25.145912 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rc9tv_kube-system(adce654e-c14f-4b9f-9441-d91c1f7ad783)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rc9tv_kube-system(adce654e-c14f-4b9f-9441-d91c1f7ad783)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-rc9tv" podUID="adce654e-c14f-4b9f-9441-d91c1f7ad783" Feb 13 16:07:25.203571 containerd[1932]: time="2025-02-13T16:07:25.203255283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dg565,Uid:01de8ba8-2672-4a12-9409-8ec82372a335,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:25.316308 sshd[4226]: Accepted publickey for core from 139.178.68.195 port 33686 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:25.321739 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:25.324423 containerd[1932]: time="2025-02-13T16:07:25.323928892Z" level=error msg="Failed to destroy network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.325660 containerd[1932]: time="2025-02-13T16:07:25.324850984Z" level=error msg="encountered an error cleaning up failed sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.325660 containerd[1932]: time="2025-02-13T16:07:25.325125688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dg565,Uid:01de8ba8-2672-4a12-9409-8ec82372a335,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.326566 kubelet[3274]: E0213 16:07:25.325916 3274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.326566 kubelet[3274]: E0213 16:07:25.326011 3274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dg565" Feb 13 16:07:25.326566 kubelet[3274]: E0213 16:07:25.326046 3274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dg565" Feb 13 16:07:25.327165 kubelet[3274]: E0213 16:07:25.326106 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dg565_kube-system(01de8ba8-2672-4a12-9409-8ec82372a335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dg565_kube-system(01de8ba8-2672-4a12-9409-8ec82372a335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dg565" podUID="01de8ba8-2672-4a12-9409-8ec82372a335" Feb 13 16:07:25.336248 systemd-logind[1913]: New session 8 of user core. Feb 13 16:07:25.349848 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 16:07:25.428666 systemd[1]: Created slice kubepods-besteffort-pod8ecaa1f9_7c73_451d_b745_a3b442214800.slice - libcontainer container kubepods-besteffort-pod8ecaa1f9_7c73_451d_b745_a3b442214800.slice. Feb 13 16:07:25.435199 containerd[1932]: time="2025-02-13T16:07:25.435123556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gphn8,Uid:8ecaa1f9-7c73-451d-b745-a3b442214800,Namespace:calico-system,Attempt:0,}" Feb 13 16:07:25.623129 containerd[1932]: time="2025-02-13T16:07:25.622558313Z" level=error msg="Failed to destroy network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.623312 containerd[1932]: time="2025-02-13T16:07:25.623233721Z" level=error msg="encountered an error cleaning up failed sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.623401 containerd[1932]: time="2025-02-13T16:07:25.623329325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gphn8,Uid:8ecaa1f9-7c73-451d-b745-a3b442214800,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.623764 kubelet[3274]: E0213 16:07:25.623701 3274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.623905 kubelet[3274]: E0213 16:07:25.623789 3274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-gphn8" Feb 13 16:07:25.623905 kubelet[3274]: E0213 16:07:25.623824 3274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gphn8" Feb 13 16:07:25.624022 kubelet[3274]: E0213 16:07:25.623907 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gphn8_calico-system(8ecaa1f9-7c73-451d-b745-a3b442214800)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gphn8_calico-system(8ecaa1f9-7c73-451d-b745-a3b442214800)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800" Feb 13 16:07:25.666127 kubelet[3274]: I0213 16:07:25.664248 3274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Feb 13 16:07:25.671050 containerd[1932]: time="2025-02-13T16:07:25.669621270Z" level=info msg="StopPodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\"" Feb 13 16:07:25.671050 containerd[1932]: time="2025-02-13T16:07:25.670005270Z" level=info msg="Ensure that sandbox 23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241 in task-service has been cleanup successfully" Feb 13 16:07:25.675067 kubelet[3274]: I0213 16:07:25.673750 3274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Feb 13 16:07:25.676244 containerd[1932]: time="2025-02-13T16:07:25.674976738Z" level=info msg="StopPodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\"" Feb 13 16:07:25.679857 containerd[1932]: time="2025-02-13T16:07:25.679439166Z" level=info msg="Ensure that sandbox e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5 in task-service has been cleanup successfully" Feb 13 16:07:25.689109 kubelet[3274]: I0213 16:07:25.688629 3274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:25.690092 containerd[1932]: time="2025-02-13T16:07:25.690012618Z" level=info msg="StopPodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\"" Feb 13 16:07:25.692313 containerd[1932]: time="2025-02-13T16:07:25.691946226Z" level=info msg="Ensure that sandbox 01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa in task-service has been cleanup successfully" Feb 13 16:07:25.695138 sshd[4226]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:25.700896 kubelet[3274]: I0213 16:07:25.698300 3274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:25.705119 systemd[1]: 
sshd@7-172.31.18.147:22-139.178.68.195:33686.service: Deactivated successfully. Feb 13 16:07:25.716840 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:07:25.718720 containerd[1932]: time="2025-02-13T16:07:25.718184682Z" level=info msg="StopPodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\"" Feb 13 16:07:25.718720 containerd[1932]: time="2025-02-13T16:07:25.718594782Z" level=info msg="Ensure that sandbox 46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894 in task-service has been cleanup successfully" Feb 13 16:07:25.721201 kubelet[3274]: I0213 16:07:25.720567 3274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:25.728748 systemd-logind[1913]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:07:25.732773 containerd[1932]: time="2025-02-13T16:07:25.732698010Z" level=info msg="StopPodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\"" Feb 13 16:07:25.733604 containerd[1932]: time="2025-02-13T16:07:25.733049274Z" level=info msg="Ensure that sandbox 5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161 in task-service has been cleanup successfully" Feb 13 16:07:25.734346 systemd-logind[1913]: Removed session 8. Feb 13 16:07:25.772972 containerd[1932]: time="2025-02-13T16:07:25.772252206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 16:07:25.789972 kubelet[3274]: I0213 16:07:25.788063 3274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:25.798146 containerd[1932]: time="2025-02-13T16:07:25.798093234Z" level=info msg="StopPodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\"" Feb 13 16:07:25.801024 containerd[1932]: time="2025-02-13T16:07:25.800266542Z" level=info msg="Ensure that sandbox 7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847 in task-service has been cleanup successfully" Feb 13 16:07:25.884871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241-shm.mount: Deactivated successfully. Feb 13 16:07:25.885361 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa-shm.mount: Deactivated successfully. 
Feb 13 16:07:25.930297 containerd[1932]: time="2025-02-13T16:07:25.930203251Z" level=error msg="StopPodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" failed" error="failed to destroy network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.937654 kubelet[3274]: E0213 16:07:25.937051 3274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:25.937654 kubelet[3274]: E0213 16:07:25.937196 3274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa"} Feb 13 16:07:25.937654 kubelet[3274]: E0213 16:07:25.937358 3274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ffc1870-253d-4caf-9e31-d298393f248f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:07:25.937654 kubelet[3274]: E0213 16:07:25.937444 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ffc1870-253d-4caf-9e31-d298393f248f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp" podUID="3ffc1870-253d-4caf-9e31-d298393f248f" Feb 13 16:07:25.943381 containerd[1932]: time="2025-02-13T16:07:25.942776755Z" level=error msg="StopPodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" failed" error="failed to destroy network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.944518 kubelet[3274]: E0213 16:07:25.943139 3274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Feb 13 16:07:25.944518 kubelet[3274]: E0213 
16:07:25.943211 3274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"} Feb 13 16:07:25.944518 kubelet[3274]: E0213 16:07:25.943265 3274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"adce654e-c14f-4b9f-9441-d91c1f7ad783\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:07:25.944518 kubelet[3274]: E0213 16:07:25.943305 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"adce654e-c14f-4b9f-9441-d91c1f7ad783\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rc9tv" podUID="adce654e-c14f-4b9f-9441-d91c1f7ad783" Feb 13 16:07:25.969302 containerd[1932]: time="2025-02-13T16:07:25.968722999Z" level=error msg="StopPodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" failed" error="failed to destroy network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:25.970624 kubelet[3274]: E0213 16:07:25.969035 3274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Feb 13 16:07:25.970624 kubelet[3274]: E0213 16:07:25.969108 3274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"} Feb 13 16:07:25.970624 kubelet[3274]: E0213 16:07:25.969164 3274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ab6612a-9cb7-4dd0-8057-abec92b485a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:07:25.970624 kubelet[3274]: E0213 16:07:25.969205 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ab6612a-9cb7-4dd0-8057-abec92b485a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk" podUID="7ab6612a-9cb7-4dd0-8057-abec92b485a5" Feb 13 16:07:26.003783 containerd[1932]: time="2025-02-13T16:07:26.003693651Z" level=error msg="StopPodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" failed" error="failed to destroy network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:26.004394 kubelet[3274]: E0213 16:07:26.004345 3274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:26.004882 kubelet[3274]: E0213 16:07:26.004616 3274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161"} Feb 13 16:07:26.004882 kubelet[3274]: E0213 16:07:26.004728 3274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e16e8353-7776-4b1a-a927-2ad9f0aeb669\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:07:26.004882 kubelet[3274]: E0213 16:07:26.004807 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e16e8353-7776-4b1a-a927-2ad9f0aeb669\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g" podUID="e16e8353-7776-4b1a-a927-2ad9f0aeb669" Feb 13 16:07:26.005457 containerd[1932]: time="2025-02-13T16:07:26.005380479Z" level=error msg="StopPodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" failed" error="failed to destroy network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:26.006070 kubelet[3274]: E0213 16:07:26.005856 3274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:26.006070 kubelet[3274]: E0213 16:07:26.005922 3274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894"} Feb 13 16:07:26.006070 kubelet[3274]: E0213 16:07:26.005980 3274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01de8ba8-2672-4a12-9409-8ec82372a335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:07:26.006070 kubelet[3274]: E0213 16:07:26.006019 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01de8ba8-2672-4a12-9409-8ec82372a335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dg565" podUID="01de8ba8-2672-4a12-9409-8ec82372a335" Feb 13 16:07:26.007650 containerd[1932]: time="2025-02-13T16:07:26.007558071Z" level=error msg="StopPodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" failed" error="failed to destroy network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:07:26.008562 kubelet[3274]: E0213 16:07:26.008225 3274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:26.008562 kubelet[3274]: E0213 16:07:26.008297 3274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847"} Feb 13 16:07:26.008562 kubelet[3274]: E0213 16:07:26.008353 3274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ecaa1f9-7c73-451d-b745-a3b442214800\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Feb 13 16:07:26.008562 kubelet[3274]: E0213 16:07:26.008393 3274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ecaa1f9-7c73-451d-b745-a3b442214800\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gphn8" podUID="8ecaa1f9-7c73-451d-b745-a3b442214800" Feb 13 16:07:30.738093 systemd[1]: Started sshd@8-172.31.18.147:22-139.178.68.195:46086.service - OpenSSH per-connection server daemon (139.178.68.195:46086). Feb 13 16:07:30.928516 sshd[4424]: Accepted publickey for core from 139.178.68.195 port 46086 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:30.933333 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:30.948335 systemd-logind[1913]: New session 9 of user core. Feb 13 16:07:30.956841 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:07:31.315970 sshd[4424]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:31.325819 systemd[1]: sshd@8-172.31.18.147:22-139.178.68.195:46086.service: Deactivated successfully. Feb 13 16:07:31.336104 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:07:31.339617 systemd-logind[1913]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:07:31.343137 systemd-logind[1913]: Removed session 9. Feb 13 16:07:33.143005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790012225.mount: Deactivated successfully. 
Feb 13 16:07:33.229446 containerd[1932]: time="2025-02-13T16:07:33.228532427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:33.231032 containerd[1932]: time="2025-02-13T16:07:33.230452691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 16:07:33.233238 containerd[1932]: time="2025-02-13T16:07:33.233142383Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:33.240949 containerd[1932]: time="2025-02-13T16:07:33.240853055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:33.242949 containerd[1932]: time="2025-02-13T16:07:33.242678555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.469775757s" Feb 13 16:07:33.242949 containerd[1932]: time="2025-02-13T16:07:33.242760827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 16:07:33.289411 containerd[1932]: time="2025-02-13T16:07:33.289115603Z" level=info msg="CreateContainer within sandbox \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 16:07:33.344023 containerd[1932]: time="2025-02-13T16:07:33.343830528Z" level=info msg="CreateContainer within sandbox \"45423ae8296f1de0774b58e7486ccfb6f76f587445145e9c2360df30f7171520\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"366740ddf50311bed7964203628f030473552c32598558b981995b209a47abe7\"" Feb 13 16:07:33.345044 containerd[1932]: time="2025-02-13T16:07:33.344884920Z" level=info msg="StartContainer for \"366740ddf50311bed7964203628f030473552c32598558b981995b209a47abe7\"" Feb 13 16:07:33.404032 systemd[1]: Started cri-containerd-366740ddf50311bed7964203628f030473552c32598558b981995b209a47abe7.scope - libcontainer container 366740ddf50311bed7964203628f030473552c32598558b981995b209a47abe7. Feb 13 16:07:33.500760 containerd[1932]: time="2025-02-13T16:07:33.500216856Z" level=info msg="StartContainer for \"366740ddf50311bed7964203628f030473552c32598558b981995b209a47abe7\" returns successfully" Feb 13 16:07:33.624523 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 16:07:33.624708 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 16:07:34.890956 systemd[1]: run-containerd-runc-k8s.io-366740ddf50311bed7964203628f030473552c32598558b981995b209a47abe7-runc.8oMkVL.mount: Deactivated successfully. Feb 13 16:07:36.134525 kernel: bpftool[4670]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 16:07:36.359817 systemd[1]: Started sshd@9-172.31.18.147:22-139.178.68.195:46090.service - OpenSSH per-connection server daemon (139.178.68.195:46090).
Feb 13 16:07:36.470571 (udev-worker)[4477]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:07:36.471187 systemd-networkd[1832]: vxlan.calico: Link UP Feb 13 16:07:36.471195 systemd-networkd[1832]: vxlan.calico: Gained carrier Feb 13 16:07:36.524892 (udev-worker)[4479]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:07:36.621362 sshd[4678]: Accepted publickey for core from 139.178.68.195 port 46090 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:36.632434 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:36.649989 systemd-logind[1913]: New session 10 of user core. Feb 13 16:07:36.656840 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:07:37.027074 sshd[4678]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:37.036630 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:07:37.040718 systemd[1]: sshd@9-172.31.18.147:22-139.178.68.195:46090.service: Deactivated successfully. Feb 13 16:07:37.051222 systemd-logind[1913]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:07:37.078074 systemd[1]: Started sshd@10-172.31.18.147:22-139.178.68.195:42300.service - OpenSSH per-connection server daemon (139.178.68.195:42300). Feb 13 16:07:37.081942 systemd-logind[1913]: Removed session 10. Feb 13 16:07:37.266770 sshd[4752]: Accepted publickey for core from 139.178.68.195 port 42300 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:37.269869 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:37.280705 systemd-logind[1913]: New session 11 of user core. Feb 13 16:07:37.287852 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:07:37.419779 containerd[1932]: time="2025-02-13T16:07:37.418727368Z" level=info msg="StopPodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\"" Feb 13 16:07:37.646227 kubelet[3274]: I0213 16:07:37.646035 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mlcn4" podStartSLOduration=6.927075563 podStartE2EDuration="26.646003829s" podCreationTimestamp="2025-02-13 16:07:11 +0000 UTC" firstStartedPulling="2025-02-13 16:07:13.531510593 +0000 UTC m=+27.464080985" lastFinishedPulling="2025-02-13 16:07:33.250438859 +0000 UTC m=+47.183009251" observedRunningTime="2025-02-13 16:07:33.907954371 +0000 UTC m=+47.840524787" watchObservedRunningTime="2025-02-13 16:07:37.646003829 +0000 UTC m=+51.578574317" Feb 13 16:07:37.767607 sshd[4752]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:37.781299 systemd[1]: sshd@10-172.31.18.147:22-139.178.68.195:42300.service: Deactivated successfully. Feb 13 16:07:37.788209 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:07:37.825871 systemd-logind[1913]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:07:37.832991 systemd[1]: Started sshd@11-172.31.18.147:22-139.178.68.195:42302.service - OpenSSH per-connection server daemon (139.178.68.195:42302). Feb 13 16:07:37.847649 systemd-logind[1913]: Removed session 11. 
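The pod_startup_latency_tracker line above attributes the whole stall to the image pull: firstStartedPulling at 16:07:13.53 and lastFinishedPulling at 16:07:33.25 put roughly 19.7 of the 26.6-second pod start inside the pull of calico/node. A quick check of that arithmetic, with the timestamps copied from the log (kubelet prints nanosecond precision, which Python's datetime must truncate to microseconds):

    from datetime import datetime

    def parse(ts: str) -> datetime:
        # Trim the fractional seconds to six digits: strptime's %f parses
        # at most microseconds, while kubelet logs nanoseconds.
        head, frac = ts.split(".")
        return datetime.strptime(head + "." + frac[:6], "%Y-%m-%d %H:%M:%S.%f")

    started = parse("2025-02-13 16:07:13.531510593")   # firstStartedPulling
    finished = parse("2025-02-13 16:07:33.250438859")  # lastFinishedPulling
    print(finished - started)  # 0:00:19.718928 of the ~26.6s pod start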
Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.628 [INFO][4776] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.629 [INFO][4776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" iface="eth0" netns="/var/run/netns/cni-efca4e85-c8ce-5ac4-c119-ef033df224fe" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.633 [INFO][4776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" iface="eth0" netns="/var/run/netns/cni-efca4e85-c8ce-5ac4-c119-ef033df224fe" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.643 [INFO][4776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" iface="eth0" netns="/var/run/netns/cni-efca4e85-c8ce-5ac4-c119-ef033df224fe" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.644 [INFO][4776] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.644 [INFO][4776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.714 [INFO][4782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.715 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.715 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.774 [WARNING][4782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.774 [INFO][4782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.817 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:37.851374 containerd[1932]: 2025-02-13 16:07:37.835 [INFO][4776] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Feb 13 16:07:37.865699 systemd[1]: run-netns-cni\x2defca4e85\x2dc8ce\x2d5ac4\x2dc119\x2def033df224fe.mount: Deactivated successfully. 
Feb 13 16:07:37.867931 containerd[1932]: time="2025-02-13T16:07:37.866898798Z" level=info msg="TearDown network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" successfully" Feb 13 16:07:37.867931 containerd[1932]: time="2025-02-13T16:07:37.866955162Z" level=info msg="StopPodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" returns successfully" Feb 13 16:07:37.878257 containerd[1932]: time="2025-02-13T16:07:37.876354606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f7b46b7f-wxpjk,Uid:7ab6612a-9cb7-4dd0-8057-abec92b485a5,Namespace:calico-system,Attempt:1,}" Feb 13 16:07:38.083514 sshd[4791]: Accepted publickey for core from 139.178.68.195 port 42302 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:38.084372 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:38.093915 systemd-networkd[1832]: vxlan.calico: Gained IPv6LL Feb 13 16:07:38.101931 systemd-logind[1913]: New session 12 of user core. Feb 13 16:07:38.112104 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:07:38.200546 systemd-networkd[1832]: calidb262736052: Link UP Feb 13 16:07:38.202721 systemd-networkd[1832]: calidb262736052: Gained carrier Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.034 [INFO][4794] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0 calico-kube-controllers-59f7b46b7f- calico-system 7ab6612a-9cb7-4dd0-8057-abec92b485a5 870 0 2025-02-13 16:07:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59f7b46b7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-147 calico-kube-controllers-59f7b46b7f-wxpjk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidb262736052 [] []}} ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.035 [INFO][4794] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.091 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" HandleID="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.138 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" HandleID="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" 
Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-147", "pod":"calico-kube-controllers-59f7b46b7f-wxpjk", "timestamp":"2025-02-13 16:07:38.091183731 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.138 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.138 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.138 [INFO][4805] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.141 [INFO][4805] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.150 [INFO][4805] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.158 [INFO][4805] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.161 [INFO][4805] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.164 [INFO][4805] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.165 [INFO][4805] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.168 [INFO][4805] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3 Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.177 [INFO][4805] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.187 [INFO][4805] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.129/26] block=192.168.83.128/26 handle="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.188 [INFO][4805] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.129/26] handle="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" host="ip-172-31-18-147" Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.188 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:07:38.233523 containerd[1932]: 2025-02-13 16:07:38.188 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.129/26] IPv6=[] ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" HandleID="k8s-pod-network.ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.240032 containerd[1932]: 2025-02-13 16:07:38.192 [INFO][4794] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0", GenerateName:"calico-kube-controllers-59f7b46b7f-", Namespace:"calico-system", SelfLink:"", UID:"7ab6612a-9cb7-4dd0-8057-abec92b485a5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f7b46b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"calico-kube-controllers-59f7b46b7f-wxpjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb262736052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:38.240032 containerd[1932]: 2025-02-13 16:07:38.192 [INFO][4794] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.129/32] ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.240032 containerd[1932]: 2025-02-13 16:07:38.193 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb262736052 ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.240032 containerd[1932]: 2025-02-13 16:07:38.203 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.240032 containerd[1932]: 2025-02-13 16:07:38.204 
[INFO][4794] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0", GenerateName:"calico-kube-controllers-59f7b46b7f-", Namespace:"calico-system", SelfLink:"", UID:"7ab6612a-9cb7-4dd0-8057-abec92b485a5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f7b46b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3", Pod:"calico-kube-controllers-59f7b46b7f-wxpjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb262736052", MAC:"62:21:6c:fd:da:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:38.240032 containerd[1932]: 2025-02-13 16:07:38.228 [INFO][4794] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3" Namespace="calico-system" Pod="calico-kube-controllers-59f7b46b7f-wxpjk" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0" Feb 13 16:07:38.312608 containerd[1932]: time="2025-02-13T16:07:38.312283048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:38.312608 containerd[1932]: time="2025-02-13T16:07:38.312407284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:38.312608 containerd[1932]: time="2025-02-13T16:07:38.312445660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:38.315093 containerd[1932]: time="2025-02-13T16:07:38.314714476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:38.403551 systemd[1]: Started cri-containerd-ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3.scope - libcontainer container ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3. 
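The IPAM exchange above shows Calico's block-affinity model: the node claims the /26 block 192.168.83.128/26, then hands out its first workload address, 192.168.83.129, to the kube-controllers pod. The block arithmetic can be confirmed with nothing but the standard library (addresses taken verbatim from the log):

    import ipaddress

    block = ipaddress.ip_network("192.168.83.128/26")
    assigned = ipaddress.ip_address("192.168.83.129")

    print(block.num_addresses)   # 64 addresses per /26 block
    print(assigned in block)     # True: .129 sits inside .128-.191
    print(block.network_address, "-", block.broadcast_address)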
Feb 13 16:07:38.429849 containerd[1932]: time="2025-02-13T16:07:38.429620501Z" level=info msg="StopPodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\"" Feb 13 16:07:38.433506 containerd[1932]: time="2025-02-13T16:07:38.429622289Z" level=info msg="StopPodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\"" Feb 13 16:07:38.521440 sshd[4791]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:38.533866 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:07:38.539325 systemd[1]: sshd@11-172.31.18.147:22-139.178.68.195:42302.service: Deactivated successfully. Feb 13 16:07:38.550246 systemd-logind[1913]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:07:38.553736 systemd-logind[1913]: Removed session 12. Feb 13 16:07:38.617557 containerd[1932]: time="2025-02-13T16:07:38.617225922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f7b46b7f-wxpjk,Uid:7ab6612a-9cb7-4dd0-8057-abec92b485a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3\"" Feb 13 16:07:38.624580 containerd[1932]: time="2025-02-13T16:07:38.624447858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.642 [INFO][4890] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.642 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" iface="eth0" netns="/var/run/netns/cni-14ccbbad-66a7-09d8-8126-22b5fbbbd4be" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.643 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" iface="eth0" netns="/var/run/netns/cni-14ccbbad-66a7-09d8-8126-22b5fbbbd4be" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.644 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" iface="eth0" netns="/var/run/netns/cni-14ccbbad-66a7-09d8-8126-22b5fbbbd4be" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.644 [INFO][4890] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.644 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.718 [INFO][4916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.720 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.720 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.739 [WARNING][4916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.739 [INFO][4916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.742 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:38.748602 containerd[1932]: 2025-02-13 16:07:38.744 [INFO][4890] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:38.750111 containerd[1932]: time="2025-02-13T16:07:38.750033919Z" level=info msg="TearDown network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" successfully" Feb 13 16:07:38.750111 containerd[1932]: time="2025-02-13T16:07:38.750098791Z" level=info msg="StopPodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" returns successfully" Feb 13 16:07:38.752735 containerd[1932]: time="2025-02-13T16:07:38.752200471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dg565,Uid:01de8ba8-2672-4a12-9409-8ec82372a335,Namespace:kube-system,Attempt:1,}" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.677 [INFO][4898] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.678 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" iface="eth0" netns="/var/run/netns/cni-ea3ede4d-8282-f5d5-18da-74deaa61e4e7" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.679 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" iface="eth0" netns="/var/run/netns/cni-ea3ede4d-8282-f5d5-18da-74deaa61e4e7" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.679 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" iface="eth0" netns="/var/run/netns/cni-ea3ede4d-8282-f5d5-18da-74deaa61e4e7" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.679 [INFO][4898] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.679 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.737 [INFO][4920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.737 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.742 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.762 [WARNING][4920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.762 [INFO][4920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.767 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:38.775353 containerd[1932]: 2025-02-13 16:07:38.771 [INFO][4898] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:38.779190 containerd[1932]: time="2025-02-13T16:07:38.776159899Z" level=info msg="TearDown network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" successfully" Feb 13 16:07:38.779190 containerd[1932]: time="2025-02-13T16:07:38.776226271Z" level=info msg="StopPodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" returns successfully" Feb 13 16:07:38.779190 containerd[1932]: time="2025-02-13T16:07:38.778522495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gphn8,Uid:8ecaa1f9-7c73-451d-b745-a3b442214800,Namespace:calico-system,Attempt:1,}" Feb 13 16:07:38.882199 systemd[1]: run-netns-cni\x2dea3ede4d\x2d8282\x2df5d5\x2d18da\x2d74deaa61e4e7.mount: Deactivated successfully. Feb 13 16:07:38.885357 systemd[1]: run-netns-cni\x2d14ccbbad\x2d66a7\x2d09d8\x2d8126\x2d22b5fbbbd4be.mount: Deactivated successfully. 
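The two teardown sequences above show how deliberately idempotent the CNI DEL path is: the veth is already gone ("Nothing to do"), the release by handleID finds nothing and is logged as a WARNING and ignored, and the plugin then falls back to releasing by workloadID, all while holding the host-wide IPAM lock. A self-contained sketch of that shape, with an in-memory map standing in for the Calico datastore (the workload-ID format here is illustrative):

    package main

    import (
        "log"
        "sync"
    )

    // allocator is a toy stand-in for Calico IPAM state: key -> allocated IP.
    type allocator struct {
        mu    sync.Mutex // plays the role of the host-wide IPAM lock
        byKey map[string]string
    }

    // release frees whatever is recorded under key and reports whether
    // anything was found; releasing a missing key is not an error.
    func (a *allocator) release(key string) bool {
        a.mu.Lock()
        defer a.mu.Unlock()
        if _, ok := a.byKey[key]; !ok {
            log.Printf("WARNING: asked to release %q but it doesn't exist; ignoring", key)
            return false
        }
        delete(a.byKey, key)
        return true
    }

    func main() {
        a := &allocator{byKey: map[string]string{}}
        handleID := "k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894"
        workloadID := "kube-system/coredns-7db6d8ff4d-dg565" // illustrative format

        // Try the handle first, then fall back to the workload ID, as the
        // log sequence above does. Both may be empty on a repeated DEL.
        if !a.release(handleID) {
            a.release(workloadID)
        }
    }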
Feb 13 16:07:39.118052 systemd-networkd[1832]: calia27165a9ba0: Link UP Feb 13 16:07:39.122130 systemd-networkd[1832]: calia27165a9ba0: Gained carrier Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:38.922 [INFO][4929] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0 coredns-7db6d8ff4d- kube-system 01de8ba8-2672-4a12-9409-8ec82372a335 895 0 2025-02-13 16:06:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-147 coredns-7db6d8ff4d-dg565 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia27165a9ba0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:38.923 [INFO][4929] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.010 [INFO][4951] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" HandleID="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.037 [INFO][4951] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" HandleID="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400023cb50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-147", "pod":"coredns-7db6d8ff4d-dg565", "timestamp":"2025-02-13 16:07:39.010540804 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.038 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.038 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
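The "calia27165a9ba0: Link UP" and "Gained carrier" lines at the top of this block are systemd-networkd reacting to rtnetlink events for the freshly created veth. The same event feed can be watched from Go with the third-party vishvananda/netlink package; a sketch that reports up/carrier transitions for cali* interfaces (assuming that package is available, which is not necessarily the case on this host):

    package main

    import (
        "log"
        "net"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        updates := make(chan netlink.LinkUpdate)
        done := make(chan struct{})
        defer close(done)

        // Subscribe to rtnetlink link events, the same feed systemd-networkd
        // consumes before logging "Link UP" / "Gained carrier".
        if err := netlink.LinkSubscribe(updates, done); err != nil {
            log.Fatal(err)
        }
        for u := range updates {
            attrs := u.Attrs()
            if !strings.HasPrefix(attrs.Name, "cali") {
                continue // only watch Calico veth ends
            }
            up := attrs.Flags&net.FlagUp != 0
            carrier := attrs.OperState == netlink.OperUp
            log.Printf("%s: up=%v carrier=%v", attrs.Name, up, carrier)
        }
    }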
Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.039 [INFO][4951] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.046 [INFO][4951] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.056 [INFO][4951] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.068 [INFO][4951] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.072 [INFO][4951] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.077 [INFO][4951] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.077 [INFO][4951] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.080 [INFO][4951] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0 Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.087 [INFO][4951] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.103 [INFO][4951] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.130/26] block=192.168.83.128/26 handle="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.104 [INFO][4951] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.130/26] handle="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" host="ip-172-31-18-147" Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.105 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
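The ipam.go sequence above is the whole allocation story for this pod: confirm the host's affinity to block 192.168.83.128/26, load the block, take the first free address (192.168.83.130, since .128 and .129 are already in use on this node), create a handle, and write the block back to claim the IP. A self-contained sketch of the address-picking step, minus the datastore compare-and-swap:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks a host-affine block in address order and returns the
    // first IP not yet allocated, the same effect as the "Attempting to
    // assign 1 addresses from block" step in ipam.go.
    func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.83.128/26")
        allocated := map[netip.Addr]bool{
            // .128 was taken earlier on this node (commonly the node's own
            // tunnel address; an assumption here), .129 by kube-controllers.
            netip.MustParseAddr("192.168.83.128"): true,
            netip.MustParseAddr("192.168.83.129"): true,
        }
        if ip, ok := nextFree(block, allocated); ok {
            fmt.Println(ip) // 192.168.83.130, matching the claim in the log
        }
    }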
Feb 13 16:07:39.158777 containerd[1932]: 2025-02-13 16:07:39.105 [INFO][4951] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.130/26] IPv6=[] ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" HandleID="k8s-pod-network.11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.164737 containerd[1932]: 2025-02-13 16:07:39.110 [INFO][4929] cni-plugin/k8s.go 386: Populated endpoint ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01de8ba8-2672-4a12-9409-8ec82372a335", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"coredns-7db6d8ff4d-dg565", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia27165a9ba0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:39.164737 containerd[1932]: 2025-02-13 16:07:39.110 [INFO][4929] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.130/32] ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.164737 containerd[1932]: 2025-02-13 16:07:39.110 [INFO][4929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia27165a9ba0 ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.164737 containerd[1932]: 2025-02-13 16:07:39.120 [INFO][4929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" 
WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.164737 containerd[1932]: 2025-02-13 16:07:39.123 [INFO][4929] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01de8ba8-2672-4a12-9409-8ec82372a335", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0", Pod:"coredns-7db6d8ff4d-dg565", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia27165a9ba0", MAC:"c6:4a:83:79:1b:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:39.164737 containerd[1932]: 2025-02-13 16:07:39.153 [INFO][4929] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dg565" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:39.263033 systemd-networkd[1832]: calic1453551974: Link UP Feb 13 16:07:39.273254 systemd-networkd[1832]: calic1453551974: Gained carrier Feb 13 16:07:39.286333 containerd[1932]: time="2025-02-13T16:07:39.284282477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:39.286333 containerd[1932]: time="2025-02-13T16:07:39.284551841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:39.286333 containerd[1932]: time="2025-02-13T16:07:39.284617601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:39.286333 containerd[1932]: time="2025-02-13T16:07:39.285060221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:38.959 [INFO][4939] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0 csi-node-driver- calico-system 8ecaa1f9-7c73-451d-b745-a3b442214800 896 0 2025-02-13 16:07:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-147 csi-node-driver-gphn8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic1453551974 [] []}} ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:38.960 [INFO][4939] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.047 [INFO][4956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" HandleID="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.070 [INFO][4956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" HandleID="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-147", "pod":"csi-node-driver-gphn8", "timestamp":"2025-02-13 16:07:39.047409856 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.071 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.105 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
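Note that the endpoint dumps print WorkloadEndpointPort values with Go hex formatting, so the coredns ports above appear as Port:0x35 and Port:0x23c1. Decoded, they are the expected values:

    package main

    import "fmt"

    func main() {
        // The endpoint dumps print ports in hex: 0x35 and 0x23c1.
        fmt.Println(0x35, 0x23c1) // prints "53 9153": the standard coredns dns and metrics ports
    }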
Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.105 [INFO][4956] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.111 [INFO][4956] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.157 [INFO][4956] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.174 [INFO][4956] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.180 [INFO][4956] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.192 [INFO][4956] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.192 [INFO][4956] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.200 [INFO][4956] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.218 [INFO][4956] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.233 [INFO][4956] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.131/26] block=192.168.83.128/26 handle="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.233 [INFO][4956] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.131/26] handle="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" host="ip-172-31-18-147" Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.233 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:07:39.335892 containerd[1932]: 2025-02-13 16:07:39.233 [INFO][4956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.131/26] IPv6=[] ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" HandleID="k8s-pod-network.138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.337662 containerd[1932]: 2025-02-13 16:07:39.242 [INFO][4939] cni-plugin/k8s.go 386: Populated endpoint ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ecaa1f9-7c73-451d-b745-a3b442214800", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"csi-node-driver-gphn8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1453551974", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:39.337662 containerd[1932]: 2025-02-13 16:07:39.243 [INFO][4939] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.131/32] ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.337662 containerd[1932]: 2025-02-13 16:07:39.243 [INFO][4939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1453551974 ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.337662 containerd[1932]: 2025-02-13 16:07:39.275 [INFO][4939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.337662 containerd[1932]: 2025-02-13 16:07:39.288 [INFO][4939] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" 
Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ecaa1f9-7c73-451d-b745-a3b442214800", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b", Pod:"csi-node-driver-gphn8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1453551974", MAC:"2a:c7:4d:da:3f:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:39.337662 containerd[1932]: 2025-02-13 16:07:39.330 [INFO][4939] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b" Namespace="calico-system" Pod="csi-node-driver-gphn8" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:39.381449 systemd[1]: run-containerd-runc-k8s.io-11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0-runc.8vziqQ.mount: Deactivated successfully. Feb 13 16:07:39.398914 systemd[1]: Started cri-containerd-11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0.scope - libcontainer container 11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0. Feb 13 16:07:39.415992 containerd[1932]: time="2025-02-13T16:07:39.415554234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:39.415992 containerd[1932]: time="2025-02-13T16:07:39.415674426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:39.421888 containerd[1932]: time="2025-02-13T16:07:39.415712226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:39.423901 containerd[1932]: time="2025-02-13T16:07:39.422594730Z" level=info msg="StopPodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\"" Feb 13 16:07:39.423901 containerd[1932]: time="2025-02-13T16:07:39.422628342Z" level=info msg="StopPodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\"" Feb 13 16:07:39.424407 containerd[1932]: time="2025-02-13T16:07:39.423578010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:39.509170 systemd[1]: Started cri-containerd-138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b.scope - libcontainer container 138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b. Feb 13 16:07:39.636043 containerd[1932]: time="2025-02-13T16:07:39.635889547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dg565,Uid:01de8ba8-2672-4a12-9409-8ec82372a335,Namespace:kube-system,Attempt:1,} returns sandbox id \"11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0\"" Feb 13 16:07:39.648293 containerd[1932]: time="2025-02-13T16:07:39.646747051Z" level=info msg="CreateContainer within sandbox \"11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:07:39.658706 containerd[1932]: time="2025-02-13T16:07:39.658576291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gphn8,Uid:8ecaa1f9-7c73-451d-b745-a3b442214800,Namespace:calico-system,Attempt:1,} returns sandbox id \"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b\"" Feb 13 16:07:39.690386 containerd[1932]: time="2025-02-13T16:07:39.689805259Z" level=info msg="CreateContainer within sandbox \"11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c088f393ec58a76228c35ba9cd36730769d8cbf6913d100f8a297ca2307f6c71\"" Feb 13 16:07:39.694381 containerd[1932]: time="2025-02-13T16:07:39.694244455Z" level=info msg="StartContainer for \"c088f393ec58a76228c35ba9cd36730769d8cbf6913d100f8a297ca2307f6c71\"" Feb 13 16:07:39.799448 systemd[1]: Started cri-containerd-c088f393ec58a76228c35ba9cd36730769d8cbf6913d100f8a297ca2307f6c71.scope - libcontainer container c088f393ec58a76228c35ba9cd36730769d8cbf6913d100f8a297ca2307f6c71. Feb 13 16:07:39.909584 containerd[1932]: time="2025-02-13T16:07:39.909134024Z" level=info msg="StartContainer for \"c088f393ec58a76228c35ba9cd36730769d8cbf6913d100f8a297ca2307f6c71\" returns successfully" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.748 [INFO][5087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.748 [INFO][5087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" iface="eth0" netns="/var/run/netns/cni-447ce120-9d6b-4e94-267f-4e029af1209b" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.749 [INFO][5087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" iface="eth0" netns="/var/run/netns/cni-447ce120-9d6b-4e94-267f-4e029af1209b" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.751 [INFO][5087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" iface="eth0" netns="/var/run/netns/cni-447ce120-9d6b-4e94-267f-4e029af1209b" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.751 [INFO][5087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.751 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.859 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.859 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.859 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.887 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.887 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.894 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:39.913305 containerd[1932]: 2025-02-13 16:07:39.906 [INFO][5087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:39.929905 containerd[1932]: time="2025-02-13T16:07:39.922791800Z" level=info msg="TearDown network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" successfully" Feb 13 16:07:39.929905 containerd[1932]: time="2025-02-13T16:07:39.922856372Z" level=info msg="StopPodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" returns successfully" Feb 13 16:07:39.934149 systemd[1]: run-netns-cni\x2d447ce120\x2d9d6b\x2d4e94\x2d267f\x2d4e029af1209b.mount: Deactivated successfully. Feb 13 16:07:39.938114 containerd[1932]: time="2025-02-13T16:07:39.936879752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-mmqrp,Uid:3ffc1870-253d-4caf-9e31-d298393f248f,Namespace:calico-apiserver,Attempt:1,}" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.737 [INFO][5074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.737 [INFO][5074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" iface="eth0" netns="/var/run/netns/cni-ebcd4287-1742-aa8c-28f8-c4145319ea47" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.740 [INFO][5074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" iface="eth0" netns="/var/run/netns/cni-ebcd4287-1742-aa8c-28f8-c4145319ea47" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.744 [INFO][5074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" iface="eth0" netns="/var/run/netns/cni-ebcd4287-1742-aa8c-28f8-c4145319ea47" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.745 [INFO][5074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.745 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.900 [INFO][5129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.900 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.900 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.953 [WARNING][5129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.954 [INFO][5129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.962 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:39.992480 containerd[1932]: 2025-02-13 16:07:39.974 [INFO][5074] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Feb 13 16:07:39.992480 containerd[1932]: time="2025-02-13T16:07:39.991588881Z" level=info msg="TearDown network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" successfully" Feb 13 16:07:39.992480 containerd[1932]: time="2025-02-13T16:07:39.991638177Z" level=info msg="StopPodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" returns successfully" Feb 13 16:07:40.001891 containerd[1932]: time="2025-02-13T16:07:39.997797969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc9tv,Uid:adce654e-c14f-4b9f-9441-d91c1f7ad783,Namespace:kube-system,Attempt:1,}" Feb 13 16:07:40.012157 systemd[1]: run-netns-cni\x2debcd4287\x2d1742\x2daa8c\x2d28f8\x2dc4145319ea47.mount: Deactivated successfully. Feb 13 16:07:40.205735 systemd-networkd[1832]: calidb262736052: Gained IPv6LL Feb 13 16:07:40.419713 systemd-networkd[1832]: cali222c8120566: Link UP Feb 13 16:07:40.424440 systemd-networkd[1832]: cali222c8120566: Gained carrier Feb 13 16:07:40.429845 containerd[1932]: time="2025-02-13T16:07:40.429726355Z" level=info msg="StopPodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\"" Feb 13 16:07:40.492431 kubelet[3274]: I0213 16:07:40.491207 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dg565" podStartSLOduration=41.491176567 podStartE2EDuration="41.491176567s" podCreationTimestamp="2025-02-13 16:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:40.025177109 +0000 UTC m=+53.957747621" watchObservedRunningTime="2025-02-13 16:07:40.491176567 +0000 UTC m=+54.423746959" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.179 [INFO][5163] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0 calico-apiserver-6c6548768f- calico-apiserver 3ffc1870-253d-4caf-9e31-d298393f248f 912 0 2025-02-13 16:07:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6548768f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-147 calico-apiserver-6c6548768f-mmqrp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali222c8120566 [] []}} ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.179 [INFO][5163] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.286 [INFO][5189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" 
HandleID="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.323 [INFO][5189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" HandleID="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037cb20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-147", "pod":"calico-apiserver-6c6548768f-mmqrp", "timestamp":"2025-02-13 16:07:40.286490502 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.324 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.324 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.324 [INFO][5189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.328 [INFO][5189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.339 [INFO][5189] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.351 [INFO][5189] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.363 [INFO][5189] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.374 [INFO][5189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.375 [INFO][5189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.379 [INFO][5189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4 Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.389 [INFO][5189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.404 [INFO][5189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.132/26] block=192.168.83.128/26 handle="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.404 [INFO][5189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.83.132/26] handle="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" host="ip-172-31-18-147" Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.404 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:40.514598 containerd[1932]: 2025-02-13 16:07:40.404 [INFO][5189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.132/26] IPv6=[] ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" HandleID="k8s-pod-network.8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.521496 containerd[1932]: 2025-02-13 16:07:40.411 [INFO][5163] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ffc1870-253d-4caf-9e31-d298393f248f", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"calico-apiserver-6c6548768f-mmqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali222c8120566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:40.521496 containerd[1932]: 2025-02-13 16:07:40.411 [INFO][5163] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.132/32] ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.521496 containerd[1932]: 2025-02-13 16:07:40.411 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali222c8120566 ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.521496 containerd[1932]: 2025-02-13 16:07:40.429 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" 
Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.521496 containerd[1932]: 2025-02-13 16:07:40.440 [INFO][5163] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ffc1870-253d-4caf-9e31-d298393f248f", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4", Pod:"calico-apiserver-6c6548768f-mmqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali222c8120566", MAC:"86:c9:11:f0:41:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:40.521496 containerd[1932]: 2025-02-13 16:07:40.504 [INFO][5163] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-mmqrp" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:40.579899 systemd-networkd[1832]: cali06d2ebbfcc0: Link UP Feb 13 16:07:40.584166 systemd-networkd[1832]: cali06d2ebbfcc0: Gained carrier Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.235 [INFO][5173] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0 coredns-7db6d8ff4d- kube-system adce654e-c14f-4b9f-9441-d91c1f7ad783 911 0 2025-02-13 16:06:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-147 coredns-7db6d8ff4d-rc9tv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali06d2ebbfcc0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-" Feb 13 16:07:40.633028 
containerd[1932]: 2025-02-13 16:07:40.236 [INFO][5173] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.335 [INFO][5195] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" HandleID="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.369 [INFO][5195] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" HandleID="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030e870), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-147", "pod":"coredns-7db6d8ff4d-rc9tv", "timestamp":"2025-02-13 16:07:40.335357154 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.369 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.404 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.405 [INFO][5195] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.411 [INFO][5195] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.448 [INFO][5195] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.486 [INFO][5195] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.501 [INFO][5195] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.513 [INFO][5195] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.513 [INFO][5195] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.521 [INFO][5195] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.543 [INFO][5195] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.562 [INFO][5195] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.133/26] block=192.168.83.128/26 handle="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.562 [INFO][5195] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.133/26] handle="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" host="ip-172-31-18-147" Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.562 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
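With coredns-7db6d8ff4d-rc9tv claiming 192.168.83.133 above, this node has handed out five addresses (.129 through .133) from its single affine /26, which spans 64 addresses in total. A quick check of the block size and bounds:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.83.128/26")
        size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses

        // Walk to the last address in the block.
        last := block.Addr()
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Printf("block %s: %d addresses, %s ... %s\n",
            block, size, block.Addr(), last) // 192.168.83.128 ... 192.168.83.191
    }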
Feb 13 16:07:40.633028 containerd[1932]: 2025-02-13 16:07:40.562 [INFO][5195] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.133/26] IPv6=[] ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" HandleID="k8s-pod-network.9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:40.635833 containerd[1932]: 2025-02-13 16:07:40.570 [INFO][5173] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"adce654e-c14f-4b9f-9441-d91c1f7ad783", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"coredns-7db6d8ff4d-rc9tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06d2ebbfcc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:40.635833 containerd[1932]: 2025-02-13 16:07:40.570 [INFO][5173] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.133/32] ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:40.635833 containerd[1932]: 2025-02-13 16:07:40.570 [INFO][5173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06d2ebbfcc0 ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:40.635833 containerd[1932]: 2025-02-13 16:07:40.586 [INFO][5173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:40.635833 containerd[1932]: 2025-02-13 16:07:40.590 [INFO][5173] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"adce654e-c14f-4b9f-9441-d91c1f7ad783", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c", Pod:"coredns-7db6d8ff4d-rc9tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06d2ebbfcc0", MAC:"72:78:bc:55:ed:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:40.635833 containerd[1932]: 2025-02-13 16:07:40.620 [INFO][5173] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rc9tv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:40.669244 containerd[1932]: time="2025-02-13T16:07:40.668755388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:40.669244 containerd[1932]: time="2025-02-13T16:07:40.668930744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:40.669244 containerd[1932]: time="2025-02-13T16:07:40.668970260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:40.671283 containerd[1932]: time="2025-02-13T16:07:40.670737980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:40.747731 containerd[1932]: time="2025-02-13T16:07:40.745087568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:40.747731 containerd[1932]: time="2025-02-13T16:07:40.745170488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:40.747731 containerd[1932]: time="2025-02-13T16:07:40.745207628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:40.747731 containerd[1932]: time="2025-02-13T16:07:40.745355408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:40.750126 systemd[1]: Started cri-containerd-8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4.scope - libcontainer container 8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4.
Feb 13 16:07:40.820934 systemd[1]: Started cri-containerd-9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c.scope - libcontainer container 9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c.
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.693 [INFO][5220] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.693 [INFO][5220] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" iface="eth0" netns="/var/run/netns/cni-8e336097-5ebc-2380-47c9-64429e28d8ad"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.697 [INFO][5220] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" iface="eth0" netns="/var/run/netns/cni-8e336097-5ebc-2380-47c9-64429e28d8ad"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.703 [INFO][5220] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" iface="eth0" netns="/var/run/netns/cni-8e336097-5ebc-2380-47c9-64429e28d8ad"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.703 [INFO][5220] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.703 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.834 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.834 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.834 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
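The "Started cri-containerd-<id>.scope - libcontainer container <id>" records above reflect the systemd cgroup driver: each container gets its own transient scope unit, named after the container ID. A trivial Go sketch of that observed naming convention (the helper name is illustrative, not a containerd API):

    package main

    import "fmt"

    // scopeUnit reproduces the transient unit naming visible in the
    // "Started cri-containerd-<id>.scope" records above: with the systemd
    // cgroup driver, each container runs in its own scope unit.
    func scopeUnit(containerID string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", containerID)
    }

    func main() {
        id := "9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c"
        fmt.Println(scopeUnit(id)) // the coredns sandbox's scope unit
    }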
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.865 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.866 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.873 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 16:07:40.900510 containerd[1932]: 2025-02-13 16:07:40.890 [INFO][5220] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161"
Feb 13 16:07:40.902497 containerd[1932]: time="2025-02-13T16:07:40.901636569Z" level=info msg="TearDown network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" successfully"
Feb 13 16:07:40.902497 containerd[1932]: time="2025-02-13T16:07:40.901694349Z" level=info msg="StopPodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" returns successfully"
Feb 13 16:07:40.905726 containerd[1932]: time="2025-02-13T16:07:40.902972757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-2jq2g,Uid:e16e8353-7776-4b1a-a927-2ad9f0aeb669,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 16:07:40.908860 systemd-networkd[1832]: calia27165a9ba0: Gained IPv6LL
Feb 13 16:07:40.912044 systemd[1]: run-netns-cni\x2d8e336097\x2d5ebc\x2d2380\x2d47c9\x2d64429e28d8ad.mount: Deactivated successfully.
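The teardown above is deliberately idempotent: the plugin first tries to release by handle ID, logs "Asked to release address but it doesn't exist. Ignoring" when nothing is allocated, and falls back to the workload ID, so a repeated CNI DEL cannot fail. A simplified Go sketch of that pattern with a map standing in for the datastore; the handle is taken from the log, the sample IP is hypothetical:

    package main

    import "fmt"

    // store maps an allocation handle to its IP; it stands in for Calico's
    // real datastore. The entry below is hypothetical sample data.
    var store = map[string]string{
        "k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161": "192.168.83.132",
    }

    // release is idempotent: a missing handle is logged and ignored, mirroring
    // the "Asked to release address but it doesn't exist. Ignoring" record.
    func release(handle string) {
        if ip, ok := store[handle]; ok {
            delete(store, handle)
            fmt.Printf("released %s (handle %s)\n", ip, handle)
            return
        }
        fmt.Printf("WARNING: no allocation for %s, ignoring\n", handle)
    }

    func main() {
        h := "k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161"
        release(h) // releases the address
        release(h) // a second DEL is a harmless no-op
    }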
Feb 13 16:07:41.035835 systemd-networkd[1832]: calic1453551974: Gained IPv6LL
Feb 13 16:07:41.178500 containerd[1932]: time="2025-02-13T16:07:41.177564727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc9tv,Uid:adce654e-c14f-4b9f-9441-d91c1f7ad783,Namespace:kube-system,Attempt:1,} returns sandbox id \"9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c\""
Feb 13 16:07:41.182004 containerd[1932]: time="2025-02-13T16:07:41.181804567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-mmqrp,Uid:3ffc1870-253d-4caf-9e31-d298393f248f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4\""
Feb 13 16:07:41.217097 containerd[1932]: time="2025-02-13T16:07:41.217025623Z" level=info msg="CreateContainer within sandbox \"9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 16:07:41.291847 containerd[1932]: time="2025-02-13T16:07:41.290252107Z" level=info msg="CreateContainer within sandbox \"9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de9e9f991f8010c761b132cf187165a269d03f2d312b1507b48a3e37f7114ad1\""
Feb 13 16:07:41.296231 containerd[1932]: time="2025-02-13T16:07:41.296146699Z" level=info msg="StartContainer for \"de9e9f991f8010c761b132cf187165a269d03f2d312b1507b48a3e37f7114ad1\""
Feb 13 16:07:41.462382 systemd[1]: Started cri-containerd-de9e9f991f8010c761b132cf187165a269d03f2d312b1507b48a3e37f7114ad1.scope - libcontainer container de9e9f991f8010c761b132cf187165a269d03f2d312b1507b48a3e37f7114ad1.
Feb 13 16:07:41.612837 systemd-networkd[1832]: cali222c8120566: Gained IPv6LL
Feb 13 16:07:41.634145 containerd[1932]: time="2025-02-13T16:07:41.633284613Z" level=info msg="StartContainer for \"de9e9f991f8010c761b132cf187165a269d03f2d312b1507b48a3e37f7114ad1\" returns successfully"
Feb 13 16:07:41.720863 systemd-networkd[1832]: califcdab7c831b: Link UP
Feb 13 16:07:41.723799 systemd-networkd[1832]: califcdab7c831b: Gained carrier
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.250 [INFO][5327] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0 calico-apiserver-6c6548768f- calico-apiserver e16e8353-7776-4b1a-a927-2ad9f0aeb669 927 0 2025-02-13 16:07:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6548768f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-147 calico-apiserver-6c6548768f-2jq2g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcdab7c831b [] []}} ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.250 [INFO][5327] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.467 [INFO][5358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" HandleID="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.517 [INFO][5358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" HandleID="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000580a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-147", "pod":"calico-apiserver-6c6548768f-2jq2g", "timestamp":"2025-02-13 16:07:41.460973312 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.517 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.518 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.518 [INFO][5358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147'
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.524 [INFO][5358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.541 [INFO][5358] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.564 [INFO][5358] ipam/ipam.go 489: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.572 [INFO][5358] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.584 [INFO][5358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.585 [INFO][5358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.619 [INFO][5358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.652 [INFO][5358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.689 [INFO][5358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.134/26] block=192.168.83.128/26 handle="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.689 [INFO][5358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.134/26] handle="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" host="ip-172-31-18-147"
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.689 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 16:07:41.775312 containerd[1932]: 2025-02-13 16:07:41.689 [INFO][5358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.134/26] IPv6=[] ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" HandleID="k8s-pod-network.57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.777062 containerd[1932]: 2025-02-13 16:07:41.701 [INFO][5327] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e16e8353-7776-4b1a-a927-2ad9f0aeb669", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"calico-apiserver-6c6548768f-2jq2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdab7c831b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:41.777062 containerd[1932]: 2025-02-13 16:07:41.703 [INFO][5327] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.134/32] ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.777062 containerd[1932]: 2025-02-13 16:07:41.704 [INFO][5327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcdab7c831b ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.777062 containerd[1932]: 2025-02-13 16:07:41.727 [INFO][5327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.777062 containerd[1932]: 2025-02-13 16:07:41.730 [INFO][5327] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e16e8353-7776-4b1a-a927-2ad9f0aeb669", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c", Pod:"calico-apiserver-6c6548768f-2jq2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdab7c831b", MAC:"22:52:74:fb:ca:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:41.777062 containerd[1932]: 2025-02-13 16:07:41.758 [INFO][5327] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c" Namespace="calico-apiserver" Pod="calico-apiserver-6c6548768f-2jq2g" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0"
Feb 13 16:07:41.957734 containerd[1932]: time="2025-02-13T16:07:41.955933450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:41.957734 containerd[1932]: time="2025-02-13T16:07:41.956122666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:41.957734 containerd[1932]: time="2025-02-13T16:07:41.956156566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:41.963379 containerd[1932]: time="2025-02-13T16:07:41.956460190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:42.107281 systemd[1]: Started cri-containerd-57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c.scope - libcontainer container 57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c.
Feb 13 16:07:42.507865 systemd-networkd[1832]: cali06d2ebbfcc0: Gained IPv6LL
Feb 13 16:07:42.570126 containerd[1932]: time="2025-02-13T16:07:42.569734714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6548768f-2jq2g,Uid:e16e8353-7776-4b1a-a927-2ad9f0aeb669,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c\""
Feb 13 16:07:42.827924 systemd-networkd[1832]: califcdab7c831b: Gained IPv6LL
Feb 13 16:07:43.094801 kubelet[3274]: I0213 16:07:43.092456 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rc9tv" podStartSLOduration=44.092435792 podStartE2EDuration="44.092435792s" podCreationTimestamp="2025-02-13 16:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:42.070735999 +0000 UTC m=+56.003306415" watchObservedRunningTime="2025-02-13 16:07:43.092435792 +0000 UTC m=+57.025006184"
Feb 13 16:07:43.441605 containerd[1932]: time="2025-02-13T16:07:43.440593618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:43.443583 containerd[1932]: time="2025-02-13T16:07:43.443525878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Feb 13 16:07:43.445938 containerd[1932]: time="2025-02-13T16:07:43.445881586Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:43.456136 containerd[1932]: time="2025-02-13T16:07:43.455418034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:43.457355 containerd[1932]: time="2025-02-13T16:07:43.456814450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.83223782s"
Feb 13 16:07:43.457355 containerd[1932]: time="2025-02-13T16:07:43.456879202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Feb 13 16:07:43.463099 containerd[1932]: time="2025-02-13T16:07:43.463000282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 16:07:43.505034 containerd[1932]: time="2025-02-13T16:07:43.504975934Z" level=info msg="CreateContainer within sandbox \"ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 16:07:43.598758 containerd[1932]: time="2025-02-13T16:07:43.597801059Z" level=info msg="CreateContainer within sandbox \"ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2982ba452ca3094020f0932e8f3ad606e5c8a4ebcb0ee7f871c402e4dc8374f0\""
Feb 13 16:07:43.600679 systemd[1]: Started sshd@12-172.31.18.147:22-139.178.68.195:42304.service - OpenSSH per-connection server daemon (139.178.68.195:42304).
Feb 13 16:07:43.604115 containerd[1932]: time="2025-02-13T16:07:43.600791903Z" level=info msg="StartContainer for \"2982ba452ca3094020f0932e8f3ad606e5c8a4ebcb0ee7f871c402e4dc8374f0\""
Feb 13 16:07:43.765690 systemd[1]: Started cri-containerd-2982ba452ca3094020f0932e8f3ad606e5c8a4ebcb0ee7f871c402e4dc8374f0.scope - libcontainer container 2982ba452ca3094020f0932e8f3ad606e5c8a4ebcb0ee7f871c402e4dc8374f0.
Feb 13 16:07:43.833987 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 42304 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:43.838229 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:43.854607 systemd-logind[1913]: New session 13 of user core.
Feb 13 16:07:43.865926 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 16:07:44.066296 containerd[1932]: time="2025-02-13T16:07:44.065524569Z" level=info msg="StartContainer for \"2982ba452ca3094020f0932e8f3ad606e5c8a4ebcb0ee7f871c402e4dc8374f0\" returns successfully"
Feb 13 16:07:44.284578 sshd[5478]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:44.294037 systemd[1]: sshd@12-172.31.18.147:22-139.178.68.195:42304.service: Deactivated successfully.
Feb 13 16:07:44.302070 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 16:07:44.306292 systemd-logind[1913]: Session 13 logged out. Waiting for processes to exit.
Feb 13 16:07:44.309822 systemd-logind[1913]: Removed session 13.
Feb 13 16:07:45.589535 containerd[1932]: time="2025-02-13T16:07:45.589301401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:45.591735 containerd[1932]: time="2025-02-13T16:07:45.591655789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Feb 13 16:07:45.595257 containerd[1932]: time="2025-02-13T16:07:45.594564973Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:45.601552 containerd[1932]: time="2025-02-13T16:07:45.601431505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.138346515s"
Feb 13 16:07:45.601552 containerd[1932]: time="2025-02-13T16:07:45.601542169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Feb 13 16:07:45.601827 containerd[1932]: time="2025-02-13T16:07:45.601754653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:45.605102 containerd[1932]: time="2025-02-13T16:07:45.604625389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 16:07:45.609459 containerd[1932]: time="2025-02-13T16:07:45.609194317Z" level=info msg="CreateContainer within sandbox \"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 16:07:45.651513 containerd[1932]: time="2025-02-13T16:07:45.650888437Z" level=info msg="CreateContainer within sandbox \"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c26b4a2726f10b600e35e293289db38c5c005ed39994c9ba0630f4e9f8623177\""
Feb 13 16:07:45.654256 containerd[1932]: time="2025-02-13T16:07:45.654098821Z" level=info msg="StartContainer for \"c26b4a2726f10b600e35e293289db38c5c005ed39994c9ba0630f4e9f8623177\""
Feb 13 16:07:45.725775 systemd[1]: Started cri-containerd-c26b4a2726f10b600e35e293289db38c5c005ed39994c9ba0630f4e9f8623177.scope - libcontainer container c26b4a2726f10b600e35e293289db38c5c005ed39994c9ba0630f4e9f8623177.
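The csi image pull above reads 7464730 bytes in 2.138346515s (the quoted "size" of 8834384 appears to be the stored image size, while "bytes read" is what was actually fetched, an interpretation rather than something the log states). A one-liner Go sketch of the implied transfer rate, roughly 3.3 MiB/s:

    package main

    import "fmt"

    func main() {
        bytesRead := 7464730.0 // "active requests=0, bytes read=7464730"
        seconds := 2.138346515 // "... in 2.138346515s"
        // Prints the effective pull throughput in MiB/s (~3.3).
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20))
    }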
Feb 13 16:07:45.792789 containerd[1932]: time="2025-02-13T16:07:45.792704918Z" level=info msg="StartContainer for \"c26b4a2726f10b600e35e293289db38c5c005ed39994c9ba0630f4e9f8623177\" returns successfully"
Feb 13 16:07:46.183964 kubelet[3274]: I0213 16:07:46.183869 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59f7b46b7f-wxpjk" podStartSLOduration=30.346492471 podStartE2EDuration="35.183844643s" podCreationTimestamp="2025-02-13 16:07:11 +0000 UTC" firstStartedPulling="2025-02-13 16:07:38.622678734 +0000 UTC m=+52.555249126" lastFinishedPulling="2025-02-13 16:07:43.460030906 +0000 UTC m=+57.392601298" observedRunningTime="2025-02-13 16:07:45.093242014 +0000 UTC m=+59.025812394" watchObservedRunningTime="2025-02-13 16:07:46.183844643 +0000 UTC m=+60.116415047"
Feb 13 16:07:46.302492 containerd[1932]: time="2025-02-13T16:07:46.302407644Z" level=info msg="StopPodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\""
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.383 [WARNING][5600] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"adce654e-c14f-4b9f-9441-d91c1f7ad783", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c", Pod:"coredns-7db6d8ff4d-rc9tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06d2ebbfcc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.383 [INFO][5600] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.383 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" iface="eth0" netns=""
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.383 [INFO][5600] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.383 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.443 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.444 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.445 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.466 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.466 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.472 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 16:07:46.482746 containerd[1932]: 2025-02-13 16:07:46.477 [INFO][5600] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.482746 containerd[1932]: time="2025-02-13T16:07:46.481991221Z" level=info msg="TearDown network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" successfully"
Feb 13 16:07:46.482746 containerd[1932]: time="2025-02-13T16:07:46.482072509Z" level=info msg="StopPodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" returns successfully"
Feb 13 16:07:46.485430 containerd[1932]: time="2025-02-13T16:07:46.483053629Z" level=info msg="RemovePodSandbox for \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\""
Feb 13 16:07:46.485430 containerd[1932]: time="2025-02-13T16:07:46.483109657Z" level=info msg="Forcibly stopping sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\""
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.556 [WARNING][5627] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"adce654e-c14f-4b9f-9441-d91c1f7ad783", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"9b91a8562fd35829ce4744705ec24f4ab2e04e55cac37375ac8b046a2303bd4c", Pod:"coredns-7db6d8ff4d-rc9tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06d2ebbfcc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.557 [INFO][5627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.557 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" iface="eth0" netns=""
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.557 [INFO][5627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.557 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.606 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.607 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.607 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.625 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.625 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" HandleID="k8s-pod-network.23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--rc9tv-eth0"
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.630 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 16:07:46.636292 containerd[1932]: 2025-02-13 16:07:46.634 [INFO][5627] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241"
Feb 13 16:07:46.637572 containerd[1932]: time="2025-02-13T16:07:46.636890822Z" level=info msg="TearDown network for sandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" successfully"
Feb 13 16:07:46.645870 containerd[1932]: time="2025-02-13T16:07:46.645790838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:07:46.646026 containerd[1932]: time="2025-02-13T16:07:46.645909458Z" level=info msg="RemovePodSandbox \"23aa3a29d1dcf56792f376549bac9a154355f35aa51930d97a45db3c2b6ea241\" returns successfully"
Feb 13 16:07:46.646785 containerd[1932]: time="2025-02-13T16:07:46.646741322Z" level=info msg="StopPodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\""
Feb 13 16:07:46.720747 ntpd[1903]: Listen normally on 8 vxlan.calico 192.168.83.128:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 8 vxlan.calico 192.168.83.128:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 9 vxlan.calico [fe80::6484:64ff:fe11:e2b8%4]:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 10 calidb262736052 [fe80::ecee:eeff:feee:eeee%7]:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 11 calia27165a9ba0 [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 12 calic1453551974 [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 13 cali222c8120566 [fe80::ecee:eeff:feee:eeee%10]:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 14 cali06d2ebbfcc0 [fe80::ecee:eeff:feee:eeee%11]:123
Feb 13 16:07:46.721605 ntpd[1903]: 13 Feb 16:07:46 ntpd[1903]: Listen normally on 15 califcdab7c831b [fe80::ecee:eeff:feee:eeee%12]:123
Feb 13 16:07:46.720880 ntpd[1903]: Listen normally on 9 vxlan.calico [fe80::6484:64ff:fe11:e2b8%4]:123
Feb 13 16:07:46.720964 ntpd[1903]: Listen normally on 10 calidb262736052 [fe80::ecee:eeff:feee:eeee%7]:123
Feb 13 16:07:46.721072 ntpd[1903]: Listen normally on 11 calia27165a9ba0 [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 16:07:46.721148 ntpd[1903]: Listen normally on 12 calic1453551974 [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 16:07:46.721218 ntpd[1903]: Listen normally on 13 cali222c8120566 [fe80::ecee:eeff:feee:eeee%10]:123
Feb 13 16:07:46.721284 ntpd[1903]: Listen normally on 14 cali06d2ebbfcc0 [fe80::ecee:eeff:feee:eeee%11]:123
Feb 13 16:07:46.721351 ntpd[1903]: Listen normally on 15 califcdab7c831b [fe80::ecee:eeff:feee:eeee%12]:123
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.724 [WARNING][5651] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0", GenerateName:"calico-kube-controllers-59f7b46b7f-", Namespace:"calico-system", SelfLink:"", UID:"7ab6612a-9cb7-4dd0-8057-abec92b485a5", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f7b46b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3", Pod:"calico-kube-controllers-59f7b46b7f-wxpjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb262736052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.724 [INFO][5651] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.725 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" iface="eth0" netns=""
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.725 [INFO][5651] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.725 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.768 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0"
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.769 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.769 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.785 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0"
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.785 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0"
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.789 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 16:07:46.795236 containerd[1932]: 2025-02-13 16:07:46.792 [INFO][5651] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.796129 containerd[1932]: time="2025-02-13T16:07:46.795310599Z" level=info msg="TearDown network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" successfully"
Feb 13 16:07:46.796129 containerd[1932]: time="2025-02-13T16:07:46.795366507Z" level=info msg="StopPodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" returns successfully"
Feb 13 16:07:46.796352 containerd[1932]: time="2025-02-13T16:07:46.796300539Z" level=info msg="RemovePodSandbox for \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\""
Feb 13 16:07:46.796439 containerd[1932]: time="2025-02-13T16:07:46.796360851Z" level=info msg="Forcibly stopping sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\""
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.883 [WARNING][5675] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0", GenerateName:"calico-kube-controllers-59f7b46b7f-", Namespace:"calico-system", SelfLink:"", UID:"7ab6612a-9cb7-4dd0-8057-abec92b485a5", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f7b46b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"ad35200e3b572266866fa5855fc075f9c2984f4b05be9f421e7d0bba612f9af3", Pod:"calico-kube-controllers-59f7b46b7f-wxpjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb262736052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.884 [INFO][5675] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.884 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" iface="eth0" netns=""
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.884 [INFO][5675] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.884 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.954 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0"
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.955 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.955 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.970 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0"
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.970 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" HandleID="k8s-pod-network.e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--59f7b46b7f--wxpjk-eth0"
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.974 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 16:07:46.980993 containerd[1932]: 2025-02-13 16:07:46.977 [INFO][5675] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5"
Feb 13 16:07:46.981997 containerd[1932]: time="2025-02-13T16:07:46.981048195Z" level=info msg="TearDown network for sandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" successfully"
Feb 13 16:07:46.999578 containerd[1932]: time="2025-02-13T16:07:46.999407668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:07:46.999578 containerd[1932]: time="2025-02-13T16:07:46.999545932Z" level=info msg="RemovePodSandbox \"e5735603db6c8434e9a9f257b01efced04d8589af12fb8192d36ef016b7245a5\" returns successfully"
Feb 13 16:07:47.000339 containerd[1932]: time="2025-02-13T16:07:47.000292524Z" level=info msg="StopPodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\""
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.104 [WARNING][5703] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ecaa1f9-7c73-451d-b745-a3b442214800", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b", Pod:"csi-node-driver-gphn8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1453551974", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.104 [INFO][5703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847"
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.104 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" iface="eth0" netns=""
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.104 [INFO][5703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847"
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.104 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847"
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.159 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0"
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.160 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.160 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.173 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.173 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.176 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:47.182670 containerd[1932]: 2025-02-13 16:07:47.179 [INFO][5703] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:47.182670 containerd[1932]: time="2025-02-13T16:07:47.182624880Z" level=info msg="TearDown network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" successfully" Feb 13 16:07:47.184181 containerd[1932]: time="2025-02-13T16:07:47.182688936Z" level=info msg="StopPodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" returns successfully" Feb 13 16:07:47.186550 containerd[1932]: time="2025-02-13T16:07:47.184876416Z" level=info msg="RemovePodSandbox for \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\"" Feb 13 16:07:47.186550 containerd[1932]: time="2025-02-13T16:07:47.184943448Z" level=info msg="Forcibly stopping sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\"" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.283 [WARNING][5733] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ecaa1f9-7c73-451d-b745-a3b442214800", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b", Pod:"csi-node-driver-gphn8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1453551974", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.284 [INFO][5733] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.284 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" iface="eth0" netns="" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.284 [INFO][5733] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.284 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.340 [INFO][5741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.341 [INFO][5741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.341 [INFO][5741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.356 [WARNING][5741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.356 [INFO][5741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" HandleID="k8s-pod-network.7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Workload="ip--172--31--18--147-k8s-csi--node--driver--gphn8-eth0" Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.360 [INFO][5741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:47.366356 containerd[1932]: 2025-02-13 16:07:47.363 [INFO][5733] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847" Feb 13 16:07:47.367617 containerd[1932]: time="2025-02-13T16:07:47.367092205Z" level=info msg="TearDown network for sandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" successfully" Feb 13 16:07:47.376066 containerd[1932]: time="2025-02-13T16:07:47.375898045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:07:47.376709 containerd[1932]: time="2025-02-13T16:07:47.376084561Z" level=info msg="RemovePodSandbox \"7f76b399d640792ebb41e1ff79b4f5dc999a12db6e13ff9fdd2e14ca6d5f5847\" returns successfully" Feb 13 16:07:47.377295 containerd[1932]: time="2025-02-13T16:07:47.377228989Z" level=info msg="StopPodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\"" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.458 [WARNING][5759] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ffc1870-253d-4caf-9e31-d298393f248f", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4", Pod:"calico-apiserver-6c6548768f-mmqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali222c8120566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.458 [INFO][5759] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.458 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" iface="eth0" netns="" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.458 [INFO][5759] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.458 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.538 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.539 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.539 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.564 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.565 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.571 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:47.581991 containerd[1932]: 2025-02-13 16:07:47.576 [INFO][5759] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.584009 containerd[1932]: time="2025-02-13T16:07:47.582087206Z" level=info msg="TearDown network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" successfully" Feb 13 16:07:47.584009 containerd[1932]: time="2025-02-13T16:07:47.582154694Z" level=info msg="StopPodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" returns successfully" Feb 13 16:07:47.584009 containerd[1932]: time="2025-02-13T16:07:47.583534994Z" level=info msg="RemovePodSandbox for \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\"" Feb 13 16:07:47.584009 containerd[1932]: time="2025-02-13T16:07:47.583593458Z" level=info msg="Forcibly stopping sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\"" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.734 [WARNING][5783] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ffc1870-253d-4caf-9e31-d298393f248f", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4", Pod:"calico-apiserver-6c6548768f-mmqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali222c8120566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.736 [INFO][5783] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.736 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" iface="eth0" netns="" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.736 [INFO][5783] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.736 [INFO][5783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.807 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.807 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.807 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.827 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.827 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" HandleID="k8s-pod-network.01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--mmqrp-eth0" Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.834 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:47.845835 containerd[1932]: 2025-02-13 16:07:47.839 [INFO][5783] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa" Feb 13 16:07:47.848633 containerd[1932]: time="2025-02-13T16:07:47.848022388Z" level=info msg="TearDown network for sandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" successfully" Feb 13 16:07:47.855672 containerd[1932]: time="2025-02-13T16:07:47.855531064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:07:47.856117 containerd[1932]: time="2025-02-13T16:07:47.855970876Z" level=info msg="RemovePodSandbox \"01ab462b3f197530fcf0a8bb80216b571d38fa58d64c65e1319b85dd28e91efa\" returns successfully" Feb 13 16:07:47.858423 containerd[1932]: time="2025-02-13T16:07:47.858254428Z" level=info msg="StopPodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\"" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.023 [WARNING][5811] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01de8ba8-2672-4a12-9409-8ec82372a335", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0", Pod:"coredns-7db6d8ff4d-dg565", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia27165a9ba0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.023 [INFO][5811] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.023 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" iface="eth0" netns="" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.023 [INFO][5811] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.023 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.142 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.144 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.144 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.160 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.160 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.164 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:48.176000 containerd[1932]: 2025-02-13 16:07:48.170 [INFO][5811] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.176000 containerd[1932]: time="2025-02-13T16:07:48.175964449Z" level=info msg="TearDown network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" successfully" Feb 13 16:07:48.178887 containerd[1932]: time="2025-02-13T16:07:48.176023105Z" level=info msg="StopPodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" returns successfully" Feb 13 16:07:48.178887 containerd[1932]: time="2025-02-13T16:07:48.178279297Z" level=info msg="RemovePodSandbox for \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\"" Feb 13 16:07:48.178887 containerd[1932]: time="2025-02-13T16:07:48.178335601Z" level=info msg="Forcibly stopping sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\"" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.323 [WARNING][5835] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"01de8ba8-2672-4a12-9409-8ec82372a335", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"11613cb46c6379ed0fd98e5d5176e208f2cd9108cbd231e78c0b6d571d610ac0", Pod:"coredns-7db6d8ff4d-dg565", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia27165a9ba0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.324 [INFO][5835] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.324 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" iface="eth0" netns="" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.324 [INFO][5835] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.325 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.395 [INFO][5841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.395 [INFO][5841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.395 [INFO][5841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.414 [WARNING][5841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.414 [INFO][5841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" HandleID="k8s-pod-network.46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Workload="ip--172--31--18--147-k8s-coredns--7db6d8ff4d--dg565-eth0" Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.421 [INFO][5841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:48.433418 containerd[1932]: 2025-02-13 16:07:48.426 [INFO][5835] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894" Feb 13 16:07:48.433418 containerd[1932]: time="2025-02-13T16:07:48.433050915Z" level=info msg="TearDown network for sandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" successfully" Feb 13 16:07:48.443265 containerd[1932]: time="2025-02-13T16:07:48.441531231Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:07:48.443265 containerd[1932]: time="2025-02-13T16:07:48.441662715Z" level=info msg="RemovePodSandbox \"46626bca25a9a581b7ec751d17475f8b276e8745df156960b55ad4ff80ccf894\" returns successfully" Feb 13 16:07:48.443265 containerd[1932]: time="2025-02-13T16:07:48.442388835Z" level=info msg="StopPodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\"" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.596 [WARNING][5859] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e16e8353-7776-4b1a-a927-2ad9f0aeb669", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c", Pod:"calico-apiserver-6c6548768f-2jq2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdab7c831b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.598 [INFO][5859] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.598 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" iface="eth0" netns="" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.598 [INFO][5859] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.598 [INFO][5859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.666 [INFO][5865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.667 [INFO][5865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.667 [INFO][5865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.689 [WARNING][5865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.690 [INFO][5865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.695 [INFO][5865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:48.706638 containerd[1932]: 2025-02-13 16:07:48.701 [INFO][5859] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.708728 containerd[1932]: time="2025-02-13T16:07:48.708656332Z" level=info msg="TearDown network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" successfully" Feb 13 16:07:48.708728 containerd[1932]: time="2025-02-13T16:07:48.708724492Z" level=info msg="StopPodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" returns successfully" Feb 13 16:07:48.711529 containerd[1932]: time="2025-02-13T16:07:48.709738612Z" level=info msg="RemovePodSandbox for \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\"" Feb 13 16:07:48.711529 containerd[1932]: time="2025-02-13T16:07:48.709798204Z" level=info msg="Forcibly stopping sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\"" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.825 [WARNING][5884] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0", GenerateName:"calico-apiserver-6c6548768f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e16e8353-7776-4b1a-a927-2ad9f0aeb669", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6548768f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c", Pod:"calico-apiserver-6c6548768f-2jq2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdab7c831b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.825 [INFO][5884] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.825 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" iface="eth0" netns="" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.825 [INFO][5884] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.826 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.892 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.893 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.893 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.912 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.912 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" HandleID="k8s-pod-network.5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Workload="ip--172--31--18--147-k8s-calico--apiserver--6c6548768f--2jq2g-eth0" Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.917 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:07:48.927014 containerd[1932]: 2025-02-13 16:07:48.921 [INFO][5884] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161" Feb 13 16:07:48.928678 containerd[1932]: time="2025-02-13T16:07:48.927083165Z" level=info msg="TearDown network for sandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" successfully" Feb 13 16:07:48.934511 containerd[1932]: time="2025-02-13T16:07:48.934006217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:07:48.934511 containerd[1932]: time="2025-02-13T16:07:48.934164509Z" level=info msg="RemovePodSandbox \"5b3e6f265a0a2183652f211920dce774c504bc8f7dafb63342e6a0c285d7b161\" returns successfully" Feb 13 16:07:49.333744 systemd[1]: Started sshd@13-172.31.18.147:22-139.178.68.195:42582.service - OpenSSH per-connection server daemon (139.178.68.195:42582). 
Feb 13 16:07:49.384304 containerd[1932]: time="2025-02-13T16:07:49.383883327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:49.387890 containerd[1932]: time="2025-02-13T16:07:49.387814275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 16:07:49.390038 containerd[1932]: time="2025-02-13T16:07:49.389932995Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:49.395301 containerd[1932]: time="2025-02-13T16:07:49.395174607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:49.397254 containerd[1932]: time="2025-02-13T16:07:49.396927675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.79223691s" Feb 13 16:07:49.397254 containerd[1932]: time="2025-02-13T16:07:49.396994707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 16:07:49.400955 containerd[1932]: time="2025-02-13T16:07:49.400658667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 16:07:49.406076 containerd[1932]: time="2025-02-13T16:07:49.405627171Z" level=info msg="CreateContainer within sandbox \"8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 16:07:49.444226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831926919.mount: Deactivated successfully. Feb 13 16:07:49.445176 containerd[1932]: time="2025-02-13T16:07:49.444974788Z" level=info msg="CreateContainer within sandbox \"8a54da9712f763c5428343510e20c643a31f2f308b6d00d2e9aa2cfb0393dfe4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"06c5a4c77f7277d295c0b94c9a5a2d00a822c313cc0f4970f9306bb00feda829\"" Feb 13 16:07:49.450157 containerd[1932]: time="2025-02-13T16:07:49.449622964Z" level=info msg="StartContainer for \"06c5a4c77f7277d295c0b94c9a5a2d00a822c313cc0f4970f9306bb00feda829\"" Feb 13 16:07:49.533944 systemd[1]: Started cri-containerd-06c5a4c77f7277d295c0b94c9a5a2d00a822c313cc0f4970f9306bb00feda829.scope - libcontainer container 06c5a4c77f7277d295c0b94c9a5a2d00a822c313cc0f4970f9306bb00feda829. Feb 13 16:07:49.574539 sshd[5898]: Accepted publickey for core from 139.178.68.195 port 42582 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:49.578276 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:49.588167 systemd-logind[1913]: New session 14 of user core. Feb 13 16:07:49.596764 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 16:07:49.663197 containerd[1932]: time="2025-02-13T16:07:49.663117641Z" level=info msg="StartContainer for \"06c5a4c77f7277d295c0b94c9a5a2d00a822c313cc0f4970f9306bb00feda829\" returns successfully" Feb 13 16:07:49.830040 containerd[1932]: time="2025-02-13T16:07:49.828897366Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:49.831410 containerd[1932]: time="2025-02-13T16:07:49.831299754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 16:07:49.846696 containerd[1932]: time="2025-02-13T16:07:49.846420426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 445.672227ms" Feb 13 16:07:49.846696 containerd[1932]: time="2025-02-13T16:07:49.846563418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 16:07:49.854242 containerd[1932]: time="2025-02-13T16:07:49.851356410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 16:07:49.862535 containerd[1932]: time="2025-02-13T16:07:49.862432266Z" level=info msg="CreateContainer within sandbox \"57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 16:07:49.906556 containerd[1932]: time="2025-02-13T16:07:49.906068958Z" level=info msg="CreateContainer within sandbox \"57b1c95c031b1960625965f7378f97bc0157fa3d7bcb8d481869feba17ee629c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"17761c0c6475b7bcd0ab053e98e5b2147fbe04677561deb6a8d9de825f871c81\"" Feb 13 16:07:49.913982 containerd[1932]: time="2025-02-13T16:07:49.910787610Z" level=info msg="StartContainer for \"17761c0c6475b7bcd0ab053e98e5b2147fbe04677561deb6a8d9de825f871c81\"" Feb 13 16:07:49.998746 sshd[5898]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:50.004157 systemd[1]: Started cri-containerd-17761c0c6475b7bcd0ab053e98e5b2147fbe04677561deb6a8d9de825f871c81.scope - libcontainer container 17761c0c6475b7bcd0ab053e98e5b2147fbe04677561deb6a8d9de825f871c81. Feb 13 16:07:50.008730 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:07:50.016691 systemd[1]: sshd@13-172.31.18.147:22-139.178.68.195:42582.service: Deactivated successfully. Feb 13 16:07:50.033879 systemd-logind[1913]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:07:50.038549 systemd-logind[1913]: Removed session 14. 
Feb 13 16:07:50.112850 containerd[1932]: time="2025-02-13T16:07:50.112670175Z" level=info msg="StartContainer for \"17761c0c6475b7bcd0ab053e98e5b2147fbe04677561deb6a8d9de825f871c81\" returns successfully" Feb 13 16:07:50.227287 kubelet[3274]: I0213 16:07:50.226339 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c6548768f-mmqrp" podStartSLOduration=32.027381404 podStartE2EDuration="40.226307092s" podCreationTimestamp="2025-02-13 16:07:10 +0000 UTC" firstStartedPulling="2025-02-13 16:07:41.200584423 +0000 UTC m=+55.133154815" lastFinishedPulling="2025-02-13 16:07:49.399510099 +0000 UTC m=+63.332080503" observedRunningTime="2025-02-13 16:07:50.186432879 +0000 UTC m=+64.119003259" watchObservedRunningTime="2025-02-13 16:07:50.226307092 +0000 UTC m=+64.158877520" Feb 13 16:07:50.442294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231360598.mount: Deactivated successfully. Feb 13 16:07:51.168139 kubelet[3274]: I0213 16:07:51.167662 3274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:07:51.591780 containerd[1932]: time="2025-02-13T16:07:51.591356958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:51.594967 containerd[1932]: time="2025-02-13T16:07:51.594840318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 16:07:51.598255 containerd[1932]: time="2025-02-13T16:07:51.598175586Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:51.611538 containerd[1932]: time="2025-02-13T16:07:51.609911610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:51.617066 containerd[1932]: time="2025-02-13T16:07:51.616969878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.765532372s" Feb 13 16:07:51.617066 containerd[1932]: time="2025-02-13T16:07:51.617056446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 16:07:51.636954 containerd[1932]: time="2025-02-13T16:07:51.636771115Z" level=info msg="CreateContainer within sandbox \"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 16:07:51.680533 containerd[1932]: time="2025-02-13T16:07:51.679289347Z" level=info msg="CreateContainer within sandbox \"138d2707524744c4d91e960c6538baa40b21fe8fe26b5910602a99d4c118087b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id
\"0e417f09c6263e712dea2edc11acc3f30d5a0f836a5e66a1443dbbd24968b2fb\"" Feb 13 16:07:51.687560 containerd[1932]: time="2025-02-13T16:07:51.685786747Z" level=info msg="StartContainer for \"0e417f09c6263e712dea2edc11acc3f30d5a0f836a5e66a1443dbbd24968b2fb\"" Feb 13 16:07:51.834854 systemd[1]: Started cri-containerd-0e417f09c6263e712dea2edc11acc3f30d5a0f836a5e66a1443dbbd24968b2fb.scope - libcontainer container 0e417f09c6263e712dea2edc11acc3f30d5a0f836a5e66a1443dbbd24968b2fb. Feb 13 16:07:51.998996 containerd[1932]: time="2025-02-13T16:07:51.997908788Z" level=info msg="StartContainer for \"0e417f09c6263e712dea2edc11acc3f30d5a0f836a5e66a1443dbbd24968b2fb\" returns successfully" Feb 13 16:07:52.206117 kubelet[3274]: I0213 16:07:52.206019 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c6548768f-2jq2g" podStartSLOduration=34.933704377 podStartE2EDuration="42.205995245s" podCreationTimestamp="2025-02-13 16:07:10 +0000 UTC" firstStartedPulling="2025-02-13 16:07:42.578144842 +0000 UTC m=+56.510715234" lastFinishedPulling="2025-02-13 16:07:49.850435722 +0000 UTC m=+63.783006102" observedRunningTime="2025-02-13 16:07:50.230773168 +0000 UTC m=+64.163343572" watchObservedRunningTime="2025-02-13 16:07:52.205995245 +0000 UTC m=+66.138565637" Feb 13 16:07:52.650181 kubelet[3274]: I0213 16:07:52.650127 3274 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 16:07:52.650181 kubelet[3274]: I0213 16:07:52.650180 3274 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 16:07:55.041073 systemd[1]: Started sshd@14-172.31.18.147:22-139.178.68.195:42588.service - OpenSSH per-connection server daemon (139.178.68.195:42588). Feb 13 16:07:55.231075 sshd[6076]: Accepted publickey for core from 139.178.68.195 port 42588 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:55.233989 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:55.243886 systemd-logind[1913]: New session 15 of user core. Feb 13 16:07:55.251796 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:07:55.512181 sshd[6076]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:55.517546 systemd[1]: sshd@14-172.31.18.147:22-139.178.68.195:42588.service: Deactivated successfully. Feb 13 16:07:55.521857 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:07:55.525689 systemd-logind[1913]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:07:55.528537 systemd-logind[1913]: Removed session 15. 
Feb 13 16:07:58.443528 kubelet[3274]: I0213 16:07:58.441773 3274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gphn8" podStartSLOduration=35.477998784 podStartE2EDuration="47.441738636s" podCreationTimestamp="2025-02-13 16:07:11 +0000 UTC" firstStartedPulling="2025-02-13 16:07:39.665953759 +0000 UTC m=+53.598524139" lastFinishedPulling="2025-02-13 16:07:51.629693611 +0000 UTC m=+65.562263991" observedRunningTime="2025-02-13 16:07:52.208400105 +0000 UTC m=+66.140970521" watchObservedRunningTime="2025-02-13 16:07:58.441738636 +0000 UTC m=+72.374309028"
Feb 13 16:08:00.562092 systemd[1]: Started sshd@15-172.31.18.147:22-139.178.68.195:58378.service - OpenSSH per-connection server daemon (139.178.68.195:58378).
Feb 13 16:08:00.754546 sshd[6121]: Accepted publickey for core from 139.178.68.195 port 58378 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:00.759065 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:00.766822 systemd-logind[1913]: New session 16 of user core.
Feb 13 16:08:00.775809 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 16:08:01.077218 sshd[6121]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:01.084206 systemd[1]: sshd@15-172.31.18.147:22-139.178.68.195:58378.service: Deactivated successfully.
Feb 13 16:08:01.088633 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 16:08:01.090679 systemd-logind[1913]: Session 16 logged out. Waiting for processes to exit.
Feb 13 16:08:01.093637 systemd-logind[1913]: Removed session 16.
Feb 13 16:08:01.118074 systemd[1]: Started sshd@16-172.31.18.147:22-139.178.68.195:58388.service - OpenSSH per-connection server daemon (139.178.68.195:58388).
Feb 13 16:08:01.309455 sshd[6134]: Accepted publickey for core from 139.178.68.195 port 58388 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:01.312724 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:01.323418 systemd-logind[1913]: New session 17 of user core.
Feb 13 16:08:01.330777 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 16:08:01.856166 sshd[6134]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:01.863569 systemd[1]: sshd@16-172.31.18.147:22-139.178.68.195:58388.service: Deactivated successfully.
Feb 13 16:08:01.870887 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 16:08:01.875378 systemd-logind[1913]: Session 17 logged out. Waiting for processes to exit.
Feb 13 16:08:01.901547 systemd[1]: Started sshd@17-172.31.18.147:22-139.178.68.195:58402.service - OpenSSH per-connection server daemon (139.178.68.195:58402).
Feb 13 16:08:01.904380 systemd-logind[1913]: Removed session 17.
Feb 13 16:08:02.096559 sshd[6144]: Accepted publickey for core from 139.178.68.195 port 58402 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:02.100416 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:02.114856 systemd-logind[1913]: New session 18 of user core.
Feb 13 16:08:02.121833 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 16:08:05.949376 sshd[6144]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:05.962172 systemd[1]: sshd@17-172.31.18.147:22-139.178.68.195:58402.service: Deactivated successfully.
Feb 13 16:08:05.971421 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 16:08:05.974623 systemd[1]: session-18.scope: Consumed 1.148s CPU time.
Feb 13 16:08:06.001114 systemd-logind[1913]: Session 18 logged out. Waiting for processes to exit.
Feb 13 16:08:06.011658 systemd[1]: Started sshd@18-172.31.18.147:22-139.178.68.195:58414.service - OpenSSH per-connection server daemon (139.178.68.195:58414).
Feb 13 16:08:06.015956 systemd-logind[1913]: Removed session 18.
Feb 13 16:08:06.226144 sshd[6164]: Accepted publickey for core from 139.178.68.195 port 58414 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:06.234329 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:06.247177 systemd-logind[1913]: New session 19 of user core.
Feb 13 16:08:06.256812 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 16:08:06.928066 sshd[6164]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:06.939560 systemd[1]: sshd@18-172.31.18.147:22-139.178.68.195:58414.service: Deactivated successfully.
Feb 13 16:08:06.946950 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 16:08:06.953266 systemd-logind[1913]: Session 19 logged out. Waiting for processes to exit.
Feb 13 16:08:06.976231 systemd[1]: Started sshd@19-172.31.18.147:22-139.178.68.195:56932.service - OpenSSH per-connection server daemon (139.178.68.195:56932).
Feb 13 16:08:06.981411 systemd-logind[1913]: Removed session 19.
Feb 13 16:08:07.176364 sshd[6176]: Accepted publickey for core from 139.178.68.195 port 56932 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:07.180976 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:07.192095 systemd-logind[1913]: New session 20 of user core.
Feb 13 16:08:07.199996 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 16:08:07.511789 sshd[6176]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:07.520442 systemd[1]: sshd@19-172.31.18.147:22-139.178.68.195:56932.service: Deactivated successfully.
Feb 13 16:08:07.527086 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 16:08:07.529875 systemd-logind[1913]: Session 20 logged out. Waiting for processes to exit.
Feb 13 16:08:07.534360 systemd-logind[1913]: Removed session 20.
Feb 13 16:08:12.560790 systemd[1]: Started sshd@20-172.31.18.147:22-139.178.68.195:56934.service - OpenSSH per-connection server daemon (139.178.68.195:56934).
Feb 13 16:08:12.755176 sshd[6190]: Accepted publickey for core from 139.178.68.195 port 56934 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:12.757237 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:12.767057 systemd-logind[1913]: New session 21 of user core.
Feb 13 16:08:12.776178 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 16:08:13.076224 sshd[6190]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:13.086205 systemd[1]: sshd@20-172.31.18.147:22-139.178.68.195:56934.service: Deactivated successfully.
Feb 13 16:08:13.092031 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 16:08:13.094238 systemd-logind[1913]: Session 21 logged out. Waiting for processes to exit.
Feb 13 16:08:13.098664 systemd-logind[1913]: Removed session 21.
Feb 13 16:08:18.120110 systemd[1]: Started sshd@21-172.31.18.147:22-139.178.68.195:37380.service - OpenSSH per-connection server daemon (139.178.68.195:37380).
Feb 13 16:08:18.305580 sshd[6213]: Accepted publickey for core from 139.178.68.195 port 37380 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:18.308745 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:18.318140 systemd-logind[1913]: New session 22 of user core.
Feb 13 16:08:18.326777 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 16:08:18.573655 sshd[6213]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:18.579661 systemd[1]: sshd@21-172.31.18.147:22-139.178.68.195:37380.service: Deactivated successfully.
Feb 13 16:08:18.584602 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 16:08:18.589732 systemd-logind[1913]: Session 22 logged out. Waiting for processes to exit.
Feb 13 16:08:18.592186 systemd-logind[1913]: Removed session 22.
Feb 13 16:08:20.167415 kubelet[3274]: I0213 16:08:20.167085 3274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 16:08:23.618089 systemd[1]: Started sshd@22-172.31.18.147:22-139.178.68.195:37396.service - OpenSSH per-connection server daemon (139.178.68.195:37396).
Feb 13 16:08:23.794215 sshd[6230]: Accepted publickey for core from 139.178.68.195 port 37396 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:23.798175 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:23.810294 systemd-logind[1913]: New session 23 of user core.
Feb 13 16:08:23.821877 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 16:08:24.070108 sshd[6230]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:24.077944 systemd-logind[1913]: Session 23 logged out. Waiting for processes to exit.
Feb 13 16:08:24.079387 systemd[1]: sshd@22-172.31.18.147:22-139.178.68.195:37396.service: Deactivated successfully.
Feb 13 16:08:24.084683 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 16:08:24.087314 systemd-logind[1913]: Removed session 23.
Feb 13 16:08:24.264426 kubelet[3274]: I0213 16:08:24.264371 3274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 16:08:29.119702 systemd[1]: Started sshd@23-172.31.18.147:22-139.178.68.195:50228.service - OpenSSH per-connection server daemon (139.178.68.195:50228).
Feb 13 16:08:29.321600 sshd[6285]: Accepted publickey for core from 139.178.68.195 port 50228 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:29.325178 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:29.337960 systemd-logind[1913]: New session 24 of user core.
Feb 13 16:08:29.343815 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 16:08:29.601414 sshd[6285]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:29.611133 systemd[1]: sshd@23-172.31.18.147:22-139.178.68.195:50228.service: Deactivated successfully.
Feb 13 16:08:29.620362 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 16:08:29.628185 systemd-logind[1913]: Session 24 logged out. Waiting for processes to exit.
Feb 13 16:08:29.631680 systemd-logind[1913]: Removed session 24.
Feb 13 16:08:34.643029 systemd[1]: Started sshd@24-172.31.18.147:22-139.178.68.195:50238.service - OpenSSH per-connection server daemon (139.178.68.195:50238).
Feb 13 16:08:34.823744 sshd[6301]: Accepted publickey for core from 139.178.68.195 port 50238 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:34.827370 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:34.836823 systemd-logind[1913]: New session 25 of user core.
Feb 13 16:08:34.841790 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 16:08:35.093866 sshd[6301]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:35.100203 systemd[1]: sshd@24-172.31.18.147:22-139.178.68.195:50238.service: Deactivated successfully.
Feb 13 16:08:35.104746 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 16:08:35.111199 systemd-logind[1913]: Session 25 logged out. Waiting for processes to exit.
Feb 13 16:08:35.113646 systemd-logind[1913]: Removed session 25.
Feb 13 16:08:40.132004 systemd[1]: Started sshd@25-172.31.18.147:22-139.178.68.195:48206.service - OpenSSH per-connection server daemon (139.178.68.195:48206).
Feb 13 16:08:40.311707 sshd[6314]: Accepted publickey for core from 139.178.68.195 port 48206 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:40.314388 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:40.324541 systemd-logind[1913]: New session 26 of user core.
Feb 13 16:08:40.327855 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 16:08:40.585834 sshd[6314]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:40.592909 systemd[1]: sshd@25-172.31.18.147:22-139.178.68.195:48206.service: Deactivated successfully.
Feb 13 16:08:40.597736 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 16:08:40.600719 systemd-logind[1913]: Session 26 logged out. Waiting for processes to exit.
Feb 13 16:08:40.603579 systemd-logind[1913]: Removed session 26.
Feb 13 16:08:53.945699 systemd[1]: cri-containerd-c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae.scope: Deactivated successfully.
Feb 13 16:08:53.946185 systemd[1]: cri-containerd-c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae.scope: Consumed 5.688s CPU time, 22.3M memory peak, 0B memory swap peak.
Feb 13 16:08:54.001819 containerd[1932]: time="2025-02-13T16:08:53.998924792Z" level=info msg="shim disconnected" id=c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae namespace=k8s.io
Feb 13 16:08:54.001819 containerd[1932]: time="2025-02-13T16:08:53.999125336Z" level=warning msg="cleaning up after shim disconnected" id=c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae namespace=k8s.io
Feb 13 16:08:54.001819 containerd[1932]: time="2025-02-13T16:08:53.999163796Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:54.008955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae-rootfs.mount: Deactivated successfully.
Feb 13 16:08:54.410826 kubelet[3274]: I0213 16:08:54.410689 3274 scope.go:117] "RemoveContainer" containerID="c777bdf237cbf9466ae857da243f1993f3e5171483332131973784000a5b77ae"
Feb 13 16:08:54.422868 containerd[1932]: time="2025-02-13T16:08:54.422331582Z" level=info msg="CreateContainer within sandbox \"9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 16:08:54.474114 containerd[1932]: time="2025-02-13T16:08:54.473997367Z" level=info msg="CreateContainer within sandbox \"9a8ca022a3ad5bc97cb8efe1d0f87bf8062cc73ee5b9654556ab38a5b39bbe2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"158ab206616d28bc82e78f1d6f4fbbb2fde5fcea0587310b9832feaa3e8dce75\""
Feb 13 16:08:54.475801 containerd[1932]: time="2025-02-13T16:08:54.475723543Z" level=info msg="StartContainer for \"158ab206616d28bc82e78f1d6f4fbbb2fde5fcea0587310b9832feaa3e8dce75\""
Feb 13 16:08:54.538067 systemd[1]: Started cri-containerd-158ab206616d28bc82e78f1d6f4fbbb2fde5fcea0587310b9832feaa3e8dce75.scope - libcontainer container 158ab206616d28bc82e78f1d6f4fbbb2fde5fcea0587310b9832feaa3e8dce75.
Feb 13 16:08:54.619360 containerd[1932]: time="2025-02-13T16:08:54.619294903Z" level=info msg="StartContainer for \"158ab206616d28bc82e78f1d6f4fbbb2fde5fcea0587310b9832feaa3e8dce75\" returns successfully"