Feb 13 16:04:24.211038 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 16:04:24.212256 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025
Feb 13 16:04:24.212334 kernel: KASLR disabled due to lack of seed
Feb 13 16:04:24.212352 kernel: efi: EFI v2.7 by EDK II
Feb 13 16:04:24.212369 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 16:04:24.212385 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:04:24.212404 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 16:04:24.212421 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 16:04:24.212438 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 16:04:24.212453 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 16:04:24.212477 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 16:04:24.212493 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 16:04:24.212509 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 16:04:24.212525 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 16:04:24.212545 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 16:04:24.212568 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 16:04:24.212586 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 16:04:24.212603 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 16:04:24.212619 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 16:04:24.212636 kernel: printk: bootconsole [uart0] enabled
Feb 13 16:04:24.212652 kernel: NUMA: Failed to initialise from firmware
Feb 13 16:04:24.212669 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:04:24.212685 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 16:04:24.212702 kernel: Zone ranges:
Feb 13 16:04:24.212718 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 16:04:24.212735 kernel: DMA32 empty
Feb 13 16:04:24.212756 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 16:04:24.212773 kernel: Movable zone start for each node
Feb 13 16:04:24.212790 kernel: Early memory node ranges
Feb 13 16:04:24.212806 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 16:04:24.212823 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 16:04:24.212840 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 16:04:24.212857 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 16:04:24.212873 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 16:04:24.212890 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 16:04:24.212906 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 16:04:24.212922 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 16:04:24.212938 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:04:24.212959 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 16:04:24.212977 kernel: psci: probing for conduit method from ACPI.
Feb 13 16:04:24.213000 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 16:04:24.213018 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 16:04:24.213036 kernel: psci: Trusted OS migration not required
Feb 13 16:04:24.213058 kernel: psci: SMC Calling Convention v1.1
Feb 13 16:04:24.213076 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 16:04:24.213113 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 16:04:24.213132 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 16:04:24.213150 kernel: Detected PIPT I-cache on CPU0
Feb 13 16:04:24.213167 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 16:04:24.213184 kernel: CPU features: detected: Spectre-v2
Feb 13 16:04:24.213202 kernel: CPU features: detected: Spectre-v3a
Feb 13 16:04:24.213219 kernel: CPU features: detected: Spectre-BHB
Feb 13 16:04:24.213236 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 16:04:24.213253 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 16:04:24.213276 kernel: alternatives: applying boot alternatives
Feb 13 16:04:24.213296 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:04:24.213315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:04:24.213333 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 16:04:24.213351 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:04:24.213368 kernel: Fallback order for Node 0: 0
Feb 13 16:04:24.213386 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 16:04:24.213403 kernel: Policy zone: Normal
Feb 13 16:04:24.213421 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:04:24.213438 kernel: software IO TLB: area num 2.
Feb 13 16:04:24.213455 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 16:04:24.213479 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 16:04:24.213496 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:04:24.213514 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:04:24.213532 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:04:24.213550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:04:24.213568 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:04:24.213586 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:04:24.213604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:04:24.213621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:04:24.213638 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 16:04:24.213655 kernel: GICv3: 96 SPIs implemented
Feb 13 16:04:24.213677 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 16:04:24.213695 kernel: Root IRQ handler: gic_handle_irq
Feb 13 16:04:24.213712 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 16:04:24.213729 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 16:04:24.213747 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 16:04:24.213764 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 16:04:24.213781 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 16:04:24.213799 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 16:04:24.213816 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 16:04:24.213834 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 16:04:24.213851 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:04:24.213868 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 16:04:24.213891 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 16:04:24.213908 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 16:04:24.213927 kernel: Console: colour dummy device 80x25
Feb 13 16:04:24.213944 kernel: printk: console [tty1] enabled
Feb 13 16:04:24.213962 kernel: ACPI: Core revision 20230628
Feb 13 16:04:24.213981 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 16:04:24.213999 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:04:24.214017 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:04:24.214034 kernel: landlock: Up and running.
Feb 13 16:04:24.214056 kernel: SELinux: Initializing.
Feb 13 16:04:24.214075 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:04:24.215230 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:04:24.215254 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:04:24.215275 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:04:24.215294 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:04:24.215314 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 16:04:24.215333 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 16:04:24.215351 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 16:04:24.215381 kernel: Remapping and enabling EFI services.
Feb 13 16:04:24.215402 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:04:24.215420 kernel: Detected PIPT I-cache on CPU1
Feb 13 16:04:24.215438 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 16:04:24.215457 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 16:04:24.215493 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 16:04:24.215516 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:04:24.215535 kernel: SMP: Total of 2 processors activated.
Feb 13 16:04:24.215554 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 16:04:24.215579 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 16:04:24.215599 kernel: CPU features: detected: CRC32 instructions
Feb 13 16:04:24.215618 kernel: CPU: All CPU(s) started at EL1
Feb 13 16:04:24.215651 kernel: alternatives: applying system-wide alternatives
Feb 13 16:04:24.215675 kernel: devtmpfs: initialized
Feb 13 16:04:24.215694 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:04:24.215714 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:04:24.215733 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:04:24.215751 kernel: SMBIOS 3.0.0 present.
Feb 13 16:04:24.215771 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 16:04:24.215796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:04:24.215815 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 16:04:24.215834 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 16:04:24.215853 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 16:04:24.215871 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:04:24.215890 kernel: audit: type=2000 audit(0.341:1): state=initialized audit_enabled=0 res=1
Feb 13 16:04:24.215909 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:04:24.215933 kernel: cpuidle: using governor menu
Feb 13 16:04:24.215952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 16:04:24.215971 kernel: ASID allocator initialised with 65536 entries
Feb 13 16:04:24.215990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:04:24.216009 kernel: Serial: AMBA PL011 UART driver
Feb 13 16:04:24.216027 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 16:04:24.216047 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 16:04:24.216066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 16:04:24.217149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 16:04:24.217206 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 16:04:24.217226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 16:04:24.217245 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:04:24.217263 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:04:24.217282 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 16:04:24.217301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 16:04:24.217319 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:04:24.217337 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:04:24.217356 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:04:24.217380 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:04:24.217399 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:04:24.217418 kernel: ACPI: Interpreter enabled
Feb 13 16:04:24.217436 kernel: ACPI: Using GIC for interrupt routing
Feb 13 16:04:24.217454 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 16:04:24.217473 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 16:04:24.217820 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 16:04:24.218030 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 16:04:24.218290 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 16:04:24.218497 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 16:04:24.218702 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 16:04:24.218728 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 16:04:24.218748 kernel: acpiphp: Slot [1] registered
Feb 13 16:04:24.218767 kernel: acpiphp: Slot [2] registered
Feb 13 16:04:24.218787 kernel: acpiphp: Slot [3] registered
Feb 13 16:04:24.218805 kernel: acpiphp: Slot [4] registered
Feb 13 16:04:24.218833 kernel: acpiphp: Slot [5] registered
Feb 13 16:04:24.218852 kernel: acpiphp: Slot [6] registered
Feb 13 16:04:24.218870 kernel: acpiphp: Slot [7] registered
Feb 13 16:04:24.218889 kernel: acpiphp: Slot [8] registered
Feb 13 16:04:24.218907 kernel: acpiphp: Slot [9] registered
Feb 13 16:04:24.218926 kernel: acpiphp: Slot [10] registered
Feb 13 16:04:24.218944 kernel: acpiphp: Slot [11] registered
Feb 13 16:04:24.218962 kernel: acpiphp: Slot [12] registered
Feb 13 16:04:24.218981 kernel: acpiphp: Slot [13] registered
Feb 13 16:04:24.219004 kernel: acpiphp: Slot [14] registered
Feb 13 16:04:24.219023 kernel: acpiphp: Slot [15] registered
Feb 13 16:04:24.219041 kernel: acpiphp: Slot [16] registered
Feb 13 16:04:24.219060 kernel: acpiphp: Slot [17] registered
Feb 13 16:04:24.219078 kernel: acpiphp: Slot [18] registered
Feb 13 16:04:24.221158 kernel: acpiphp: Slot [19] registered
Feb 13 16:04:24.221187 kernel: acpiphp: Slot [20] registered
Feb 13 16:04:24.221207 kernel: acpiphp: Slot [21] registered
Feb 13 16:04:24.221225 kernel: acpiphp: Slot [22] registered
Feb 13 16:04:24.223198 kernel: acpiphp: Slot [23] registered
Feb 13 16:04:24.223229 kernel: acpiphp: Slot [24] registered
Feb 13 16:04:24.223248 kernel: acpiphp: Slot [25] registered
Feb 13 16:04:24.223267 kernel: acpiphp: Slot [26] registered
Feb 13 16:04:24.223286 kernel: acpiphp: Slot [27] registered
Feb 13 16:04:24.223304 kernel: acpiphp: Slot [28] registered
Feb 13 16:04:24.223323 kernel: acpiphp: Slot [29] registered
Feb 13 16:04:24.223341 kernel: acpiphp: Slot [30] registered
Feb 13 16:04:24.223360 kernel: acpiphp: Slot [31] registered
Feb 13 16:04:24.223378 kernel: PCI host bridge to bus 0000:00
Feb 13 16:04:24.223668 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 16:04:24.223866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 16:04:24.224067 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:04:24.224940 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 16:04:24.225243 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 16:04:24.225479 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 16:04:24.225691 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 16:04:24.225914 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 16:04:24.226199 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 16:04:24.226428 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:04:24.226664 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 16:04:24.226886 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 16:04:24.228212 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 16:04:24.228552 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 16:04:24.228765 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:04:24.228972 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 16:04:24.229352 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 16:04:24.229963 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 16:04:24.230251 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 16:04:24.230488 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 16:04:24.230714 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 16:04:24.230910 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 16:04:24.231289 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:04:24.231319 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 16:04:24.231340 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 16:04:24.231360 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 16:04:24.231378 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 16:04:24.231397 kernel: iommu: Default domain type: Translated
Feb 13 16:04:24.231423 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 16:04:24.231443 kernel: efivars: Registered efivars operations
Feb 13 16:04:24.231461 kernel: vgaarb: loaded
Feb 13 16:04:24.231497 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 16:04:24.231518 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:04:24.231537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:04:24.231556 kernel: pnp: PnP ACPI init
Feb 13 16:04:24.231793 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 16:04:24.231821 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 16:04:24.231846 kernel: NET: Registered PF_INET protocol family
Feb 13 16:04:24.231866 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 16:04:24.231885 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 16:04:24.231904 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:04:24.231923 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:04:24.231941 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 16:04:24.231960 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 16:04:24.231978 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:04:24.231997 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:04:24.232021 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:04:24.232039 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:04:24.232058 kernel: kvm [1]: HYP mode not available
Feb 13 16:04:24.232076 kernel: Initialise system trusted keyrings
Feb 13 16:04:24.232137 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 16:04:24.232157 kernel: Key type asymmetric registered
Feb 13 16:04:24.232175 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:04:24.232195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 16:04:24.232214 kernel: io scheduler mq-deadline registered
Feb 13 16:04:24.232240 kernel: io scheduler kyber registered
Feb 13 16:04:24.232259 kernel: io scheduler bfq registered
Feb 13 16:04:24.232487 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 16:04:24.232517 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 16:04:24.232537 kernel: ACPI: button: Power Button [PWRB]
Feb 13 16:04:24.232557 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 16:04:24.232576 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 16:04:24.232596 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:04:24.232624 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 16:04:24.232832 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 16:04:24.232860 kernel: printk: console [ttyS0] disabled
Feb 13 16:04:24.232880 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 16:04:24.232898 kernel: printk: console [ttyS0] enabled
Feb 13 16:04:24.232918 kernel: printk: bootconsole [uart0] disabled
Feb 13 16:04:24.232936 kernel: thunder_xcv, ver 1.0
Feb 13 16:04:24.232955 kernel: thunder_bgx, ver 1.0
Feb 13 16:04:24.232973 kernel: nicpf, ver 1.0
Feb 13 16:04:24.232997 kernel: nicvf, ver 1.0
Feb 13 16:04:24.233224 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 16:04:24.233417 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:04:23 UTC (1739462663)
Feb 13 16:04:24.233444 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 16:04:24.233463 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 16:04:24.233482 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 16:04:24.233501 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 16:04:24.233519 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:04:24.233544 kernel: Segment Routing with IPv6
Feb 13 16:04:24.233563 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:04:24.233581 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:04:24.233600 kernel: Key type dns_resolver registered
Feb 13 16:04:24.233618 kernel: registered taskstats version 1
Feb 13 16:04:24.233637 kernel: Loading compiled-in X.509 certificates
Feb 13 16:04:24.233656 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5'
Feb 13 16:04:24.233674 kernel: Key type .fscrypt registered
Feb 13 16:04:24.233692 kernel: Key type fscrypt-provisioning registered
Feb 13 16:04:24.233716 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:04:24.233735 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:04:24.233753 kernel: ima: No architecture policies found
Feb 13 16:04:24.233771 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 16:04:24.233790 kernel: clk: Disabling unused clocks
Feb 13 16:04:24.233808 kernel: Freeing unused kernel memory: 39360K
Feb 13 16:04:24.233827 kernel: Run /init as init process
Feb 13 16:04:24.233845 kernel: with arguments:
Feb 13 16:04:24.233864 kernel: /init
Feb 13 16:04:24.233881 kernel: with environment:
Feb 13 16:04:24.233905 kernel: HOME=/
Feb 13 16:04:24.233923 kernel: TERM=linux
Feb 13 16:04:24.233941 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:04:24.233964 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:04:24.233987 systemd[1]: Detected virtualization amazon.
Feb 13 16:04:24.234008 systemd[1]: Detected architecture arm64.
Feb 13 16:04:24.234027 systemd[1]: Running in initrd.
Feb 13 16:04:24.234052 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:04:24.234072 systemd[1]: Hostname set to <localhost>.
Feb 13 16:04:24.234140 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:04:24.234165 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:04:24.234186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:04:24.234208 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:04:24.234231 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:04:24.234252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:04:24.234281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:04:24.234302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:04:24.234326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:04:24.234347 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:04:24.234368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:04:24.234389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:04:24.234410 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:04:24.234438 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:04:24.234460 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:04:24.234481 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:04:24.234501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:04:24.234521 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:04:24.234542 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:04:24.234562 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:04:24.234583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:04:24.234609 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:04:24.234630 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:04:24.234650 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:04:24.234685 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:04:24.234709 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:04:24.234730 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:04:24.234751 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:04:24.234772 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:04:24.234792 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:04:24.234818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:04:24.234840 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:04:24.234860 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:04:24.234881 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:04:24.234946 systemd-journald[250]: Collecting audit messages is disabled.
Feb 13 16:04:24.234997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:04:24.235019 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:04:24.235039 systemd-journald[250]: Journal started
Feb 13 16:04:24.235146 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2836a38474423b9f164c2e0cffbb7f) is 8.0M, max 75.3M, 67.3M free.
Feb 13 16:04:24.191740 systemd-modules-load[251]: Inserted module 'overlay'
Feb 13 16:04:24.241682 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:04:24.247467 kernel: Bridge firewalling registered
Feb 13 16:04:24.246308 systemd-modules-load[251]: Inserted module 'br_netfilter'
Feb 13 16:04:24.256388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:04:24.258079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:04:24.268167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:04:24.271040 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:04:24.294483 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:04:24.308124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:04:24.314460 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:04:24.322964 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:04:24.359613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:04:24.363047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:04:24.384590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:04:24.390654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:04:24.402037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:04:24.424267 dracut-cmdline[285]: dracut-dracut-053
Feb 13 16:04:24.432135 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:04:24.495959 systemd-resolved[287]: Positive Trust Anchors:
Feb 13 16:04:24.496155 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:04:24.496225 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:04:24.571108 kernel: SCSI subsystem initialized
Feb 13 16:04:24.578123 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 16:04:24.591124 kernel: iscsi: registered transport (tcp)
Feb 13 16:04:24.613435 kernel: iscsi: registered transport (qla4xxx)
Feb 13 16:04:24.613513 kernel: QLogic iSCSI HBA Driver
Feb 13 16:04:24.715117 kernel: random: crng init done
Feb 13 16:04:24.715497 systemd-resolved[287]: Defaulting to hostname 'linux'.
Feb 13 16:04:24.719129 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:04:24.735707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:04:24.748682 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:04:24.759447 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 16:04:24.802308 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 16:04:24.802386 kernel: device-mapper: uevent: version 1.0.3
Feb 13 16:04:24.804012 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 16:04:24.873147 kernel: raid6: neonx8 gen() 6635 MB/s
Feb 13 16:04:24.890149 kernel: raid6: neonx4 gen() 6446 MB/s
Feb 13 16:04:24.907139 kernel: raid6: neonx2 gen() 5411 MB/s
Feb 13 16:04:24.924142 kernel: raid6: neonx1 gen() 3931 MB/s
Feb 13 16:04:24.941161 kernel: raid6: int64x8 gen() 3751 MB/s
Feb 13 16:04:24.958152 kernel: raid6: int64x4 gen() 3651 MB/s
Feb 13 16:04:24.975195 kernel: raid6: int64x2 gen() 3550 MB/s
Feb 13 16:04:24.993120 kernel: raid6: int64x1 gen() 2723 MB/s
Feb 13 16:04:24.993268 kernel: raid6: using algorithm neonx8 gen() 6635 MB/s
Feb 13 16:04:25.010950 kernel: raid6: .... xor() 4877 MB/s, rmw enabled
Feb 13 16:04:25.011041 kernel: raid6: using neon recovery algorithm
Feb 13 16:04:25.019144 kernel: xor: measuring software checksum speed
Feb 13 16:04:25.021255 kernel: 8regs : 9545 MB/sec
Feb 13 16:04:25.021324 kernel: 32regs : 11970 MB/sec
Feb 13 16:04:25.022442 kernel: arm64_neon : 9552 MB/sec
Feb 13 16:04:25.022520 kernel: xor: using function: 32regs (11970 MB/sec)
Feb 13 16:04:25.115146 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 16:04:25.142357 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:04:25.156454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:04:25.202473 systemd-udevd[468]: Using default interface naming scheme 'v255'.
Feb 13 16:04:25.213583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:04:25.227429 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 16:04:25.273235 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Feb 13 16:04:25.354439 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:04:25.365701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:04:25.514677 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:04:25.530542 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 16:04:25.578060 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:04:25.583912 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:04:25.601280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:04:25.604279 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:04:25.638017 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 16:04:25.684879 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:04:25.748794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 16:04:25.748881 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 16:04:25.779158 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 16:04:25.779450 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 16:04:25.779745 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b1:c8:76:48:f3
Feb 13 16:04:25.772687 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:04:25.772994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:04:25.782006 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:04:25.787710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:04:25.788011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:04:25.803787 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:04:25.827679 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:04:25.832906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:04:25.864119 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 16:04:25.866775 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 16:04:25.877113 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 16:04:25.884269 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 16:04:25.884337 kernel: GPT:9289727 != 16777215
Feb 13 16:04:25.885580 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 16:04:25.885639 kernel: GPT:9289727 != 16777215
Feb 13 16:04:25.885674 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 16:04:25.885705 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:25.891579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:04:25.904851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:04:25.955166 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:04:26.036343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 16:04:26.049153 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (515)
Feb 13 16:04:26.063136 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Feb 13 16:04:26.150172 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 16:04:26.182903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 16:04:26.198745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 16:04:26.205548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 16:04:26.225514 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 16:04:26.240608 disk-uuid[660]: Primary Header is updated.
Feb 13 16:04:26.240608 disk-uuid[660]: Secondary Entries is updated.
Feb 13 16:04:26.240608 disk-uuid[660]: Secondary Header is updated.
Feb 13 16:04:26.251131 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:26.261222 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:27.268136 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:04:27.271500 disk-uuid[661]: The operation has completed successfully.
Feb 13 16:04:27.513549 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 16:04:27.514140 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 16:04:27.553386 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 16:04:27.575182 sh[921]: Success
Feb 13 16:04:27.604130 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 16:04:27.718469 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 16:04:27.729318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 16:04:27.732496 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 16:04:27.769266 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19
Feb 13 16:04:27.769336 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:27.771392 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 16:04:27.772749 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 16:04:27.773864 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 16:04:27.949128 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 16:04:27.980713 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 16:04:27.982710 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 16:04:27.991513 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 16:04:27.996340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 16:04:28.038724 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:28.038804 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:28.038862 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:04:28.045154 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:04:28.069383 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 16:04:28.072532 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:28.090637 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 16:04:28.106571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 16:04:28.232856 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:04:28.246519 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:04:28.306809 systemd-networkd[1113]: lo: Link UP
Feb 13 16:04:28.306833 systemd-networkd[1113]: lo: Gained carrier
Feb 13 16:04:28.311903 systemd-networkd[1113]: Enumeration completed
Feb 13 16:04:28.312075 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:04:28.315719 systemd[1]: Reached target network.target - Network.
Feb 13 16:04:28.321462 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:04:28.321480 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:04:28.330026 systemd-networkd[1113]: eth0: Link UP
Feb 13 16:04:28.330045 systemd-networkd[1113]: eth0: Gained carrier
Feb 13 16:04:28.330063 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:04:28.348187 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.19.49/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 16:04:28.582529 ignition[1036]: Ignition 2.19.0
Feb 13 16:04:28.583142 ignition[1036]: Stage: fetch-offline
Feb 13 16:04:28.583848 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:28.583904 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:28.584531 ignition[1036]: Ignition finished successfully
Feb 13 16:04:28.593829 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:04:28.605467 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 16:04:28.635374 ignition[1125]: Ignition 2.19.0
Feb 13 16:04:28.635403 ignition[1125]: Stage: fetch
Feb 13 16:04:28.636963 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:28.637004 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:28.637247 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:28.646635 ignition[1125]: PUT result: OK
Feb 13 16:04:28.650145 ignition[1125]: parsed url from cmdline: ""
Feb 13 16:04:28.650171 ignition[1125]: no config URL provided
Feb 13 16:04:28.650192 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:04:28.650228 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:04:28.650271 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:28.652130 ignition[1125]: PUT result: OK
Feb 13 16:04:28.652260 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 16:04:28.654772 ignition[1125]: GET result: OK
Feb 13 16:04:28.658630 ignition[1125]: parsing config with SHA512: a390b846a6be586f4d6e9edd2473fc704e1643c56d1f65ba2aa1c8723a4dd8b053037aa98b8625662ca1b659ff2263424b6cabb9edf5271f890f86ae7c186343
Feb 13 16:04:28.671590 unknown[1125]: fetched base config from "system"
Feb 13 16:04:28.671611 unknown[1125]: fetched base config from "system"
Feb 13 16:04:28.673366 ignition[1125]: fetch: fetch complete
Feb 13 16:04:28.671625 unknown[1125]: fetched user config from "aws"
Feb 13 16:04:28.673389 ignition[1125]: fetch: fetch passed
Feb 13 16:04:28.682722 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 16:04:28.673490 ignition[1125]: Ignition finished successfully
Feb 13 16:04:28.703560 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 16:04:28.730364 ignition[1131]: Ignition 2.19.0
Feb 13 16:04:28.731936 ignition[1131]: Stage: kargs
Feb 13 16:04:28.732645 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:28.732670 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:28.732832 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:28.741434 ignition[1131]: PUT result: OK
Feb 13 16:04:28.746348 ignition[1131]: kargs: kargs passed
Feb 13 16:04:28.747909 ignition[1131]: Ignition finished successfully
Feb 13 16:04:28.752062 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 16:04:28.765785 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 16:04:28.794395 ignition[1137]: Ignition 2.19.0
Feb 13 16:04:28.795208 ignition[1137]: Stage: disks
Feb 13 16:04:28.795859 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:28.795884 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:28.796074 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:28.800155 ignition[1137]: PUT result: OK
Feb 13 16:04:28.810039 ignition[1137]: disks: disks passed
Feb 13 16:04:28.810550 ignition[1137]: Ignition finished successfully
Feb 13 16:04:28.817195 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 16:04:28.820072 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 16:04:28.823800 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:04:28.827740 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:04:28.829814 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:04:28.832014 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:04:28.854651 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 16:04:28.909037 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 16:04:28.918275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 16:04:28.929259 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 16:04:29.012146 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none.
Feb 13 16:04:29.014499 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 16:04:29.017756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:04:29.043281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:04:29.049253 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 16:04:29.053305 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 16:04:29.054427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 16:04:29.054475 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:04:29.080144 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1164)
Feb 13 16:04:29.085925 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:29.085999 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:29.087202 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:04:29.092107 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:04:29.095245 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:04:29.103036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 16:04:29.127639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 16:04:29.567278 systemd-networkd[1113]: eth0: Gained IPv6LL
Feb 13 16:04:29.727693 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 16:04:29.751801 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory
Feb 13 16:04:29.760930 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 16:04:29.770461 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 16:04:30.133340 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 16:04:30.143525 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 16:04:30.158323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 16:04:30.177389 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:30.177026 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 16:04:30.211190 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:04:30.233042 ignition[1277]: INFO : Ignition 2.19.0
Feb 13 16:04:30.233042 ignition[1277]: INFO : Stage: mount
Feb 13 16:04:30.236816 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:30.236816 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:30.236816 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:30.244242 ignition[1277]: INFO : PUT result: OK
Feb 13 16:04:30.249961 ignition[1277]: INFO : mount: mount passed
Feb 13 16:04:30.251814 ignition[1277]: INFO : Ignition finished successfully
Feb 13 16:04:30.254690 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:04:30.269569 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:04:30.296643 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:04:30.316112 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1288)
Feb 13 16:04:30.320242 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:04:30.320291 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:04:30.320318 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:04:30.326124 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:04:30.329499 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:04:30.368149 ignition[1305]: INFO : Ignition 2.19.0
Feb 13 16:04:30.368149 ignition[1305]: INFO : Stage: files
Feb 13 16:04:30.372521 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:04:30.372521 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:04:30.372521 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:04:30.372521 ignition[1305]: INFO : PUT result: OK
Feb 13 16:04:30.384861 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 16:04:30.388441 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 16:04:30.388441 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 16:04:30.414300 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 16:04:30.417157 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 16:04:30.420296 unknown[1305]: wrote ssh authorized keys file for user: core
Feb 13 16:04:30.424540 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 16:04:30.424540 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 16:04:30.424540 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 16:04:30.424540 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:04:30.424540 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 16:04:30.556957 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 16:04:30.711436 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:04:30.715169 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 16:04:30.718506 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 16:04:30.718506 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:04:30.725732 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:04:30.725732 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:04:30.725732 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:04:30.725732 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:04:30.725732 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:04:30.746672 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:04:30.746672 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:04:30.746672 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:04:30.746672 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:04:30.746672 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:04:30.746672 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 16:04:31.016362 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 16:04:31.395250 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:04:31.395250 ignition[1305]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:04:31.403922 ignition[1305]: INFO : files: files passed
Feb 13 16:04:31.403922 ignition[1305]: INFO : Ignition finished successfully
Feb 13 16:04:31.446162 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 16:04:31.458424 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 16:04:31.465373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 16:04:31.475788 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 16:04:31.478351 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 16:04:31.513256 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:04:31.513256 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:04:31.521603 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:04:31.529258 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:04:31.533219 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:04:31.558164 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 16:04:31.627079 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:04:31.628735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:04:31.632616 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:04:31.638525 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:04:31.642736 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:04:31.652507 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:04:31.703320 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:04:31.714461 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:04:31.749362 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:31.752427 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:31.756827 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:04:31.759285 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:04:31.760472 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:04:31.771073 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:04:31.775581 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:04:31.778982 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:04:31.783010 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:04:31.785963 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:04:31.794576 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:04:31.794984 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:04:31.801442 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:04:31.803687 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:04:31.806577 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:04:31.810933 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:04:31.811241 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:04:31.818072 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:31.818507 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:31.827148 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 16:04:31.829160 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:31.833461 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:04:31.833778 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:04:31.834970 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:04:31.835756 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:04:31.856687 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:04:31.856925 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:04:31.883655 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:04:31.890528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 16:04:31.903963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:04:31.904469 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:31.910648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:04:31.914003 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:04:31.933730 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:04:31.939314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:04:31.958197 ignition[1357]: INFO : Ignition 2.19.0 Feb 13 16:04:31.958197 ignition[1357]: INFO : Stage: umount Feb 13 16:04:31.963000 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:31.963000 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:31.963000 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:31.971515 ignition[1357]: INFO : PUT result: OK Feb 13 16:04:31.975305 ignition[1357]: INFO : umount: umount passed Feb 13 16:04:31.978644 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:04:31.980784 ignition[1357]: INFO : Ignition finished successfully Feb 13 16:04:31.987039 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:04:31.989171 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:04:31.995624 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:04:31.997329 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:04:32.000685 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:04:32.000877 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:04:32.005612 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:04:32.007244 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:04:32.011706 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:04:32.011842 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:04:32.015342 systemd[1]: Stopped target network.target - Network. Feb 13 16:04:32.017332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:04:32.017458 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:04:32.027735 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:04:32.029567 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 16:04:32.039574 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:32.042162 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:04:32.048551 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:04:32.050581 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:04:32.050714 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:04:32.053249 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:04:32.053330 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:04:32.055433 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:04:32.055541 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:04:32.057587 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:04:32.057685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:04:32.060038 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:04:32.060389 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:04:32.066879 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:04:32.078401 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:04:32.085609 systemd-networkd[1113]: eth0: DHCPv6 lease lost Feb 13 16:04:32.091256 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:04:32.091900 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:04:32.098838 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:04:32.099328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:04:32.112249 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:04:32.112353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:04:32.123482 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:04:32.130482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:04:32.130766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:04:32.139174 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:04:32.139281 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:32.141248 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:04:32.141327 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:32.143306 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:04:32.143398 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:32.154821 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:32.186840 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:04:32.187278 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:32.195398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:04:32.195515 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:32.199978 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:04:32.200261 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 16:04:32.203870 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:04:32.204047 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:04:32.209591 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:04:32.209801 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:04:32.216271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:04:32.216523 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:04:32.236628 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:04:32.242021 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:04:32.242197 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:04:32.244806 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:04:32.244924 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:32.247594 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:04:32.247752 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:32.253551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:04:32.253675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:32.265020 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:04:32.268625 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:04:32.272549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:04:32.272880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:04:32.282967 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:04:32.307955 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:04:32.365708 systemd[1]: Switching root. Feb 13 16:04:32.399703 systemd-journald[250]: Journal stopped Feb 13 16:04:37.332628 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Feb 13 16:04:37.332843 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:04:37.332903 kernel: SELinux: policy capability open_perms=1 Feb 13 16:04:37.332949 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:04:37.332989 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:04:37.333024 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:04:37.333059 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:04:37.333152 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:04:37.333190 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:04:37.333224 kernel: audit: type=1403 audit(1739462674.240:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:04:37.333279 systemd[1]: Successfully loaded SELinux policy in 64.007ms. Feb 13 16:04:37.333347 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.924ms. 
Feb 13 16:04:37.333391 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:04:37.333427 systemd[1]: Detected virtualization amazon. Feb 13 16:04:37.333462 systemd[1]: Detected architecture arm64. Feb 13 16:04:37.333497 systemd[1]: Detected first boot. Feb 13 16:04:37.333530 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:04:37.333564 zram_generator::config[1417]: No configuration found. Feb 13 16:04:37.333606 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:04:37.333640 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:04:37.333680 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 16:04:37.333712 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 16:04:37.333745 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:04:37.333777 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:04:37.333811 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:04:37.333845 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:04:37.333879 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:04:37.333912 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:04:37.333949 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:04:37.333986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:37.334020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:37.334052 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:04:37.336944 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:04:37.337057 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:04:37.337135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:04:37.337173 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:04:37.337210 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:37.337259 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:04:37.337290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:37.337325 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:04:37.337362 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:04:37.337396 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:04:37.337426 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:04:37.337459 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:04:37.337490 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 16:04:37.337527 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:04:37.337561 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:04:37.337592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:37.337624 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:04:37.337656 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:04:37.337687 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:04:37.337717 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:04:37.337748 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:04:37.337781 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:04:37.337816 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:04:37.337852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 16:04:37.337887 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:04:37.337921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:37.337951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:04:37.337984 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:04:37.338017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:37.338049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:04:37.338116 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:37.338728 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:04:37.338782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:04:37.338822 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:04:37.338862 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 16:04:37.338901 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 16:04:37.338932 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:04:37.338962 kernel: loop: module loaded Feb 13 16:04:37.338996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:04:37.339025 kernel: fuse: init (API version 7.39) Feb 13 16:04:37.339071 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:04:37.340296 kernel: ACPI: bus type drm_connector registered Feb 13 16:04:37.340345 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:04:37.340395 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:04:37.340429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:04:37.340460 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:04:37.340490 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:04:37.340521 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 16:04:37.340553 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:04:37.340728 systemd-journald[1517]: Collecting audit messages is disabled. Feb 13 16:04:37.340809 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:04:37.340846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:37.340882 systemd-journald[1517]: Journal started Feb 13 16:04:37.340936 systemd-journald[1517]: Runtime Journal (/run/log/journal/ec2836a38474423b9f164c2e0cffbb7f) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:04:37.350158 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:04:37.356420 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:04:37.356932 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 16:04:37.362309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:37.362837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:37.365999 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:04:37.366511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:04:37.370354 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:04:37.373987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:37.374779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:37.379672 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:04:37.380209 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:04:37.383539 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:37.387555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:37.391927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:37.398747 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:04:37.402511 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:04:37.437785 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:04:37.448375 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:04:37.463391 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:04:37.466352 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:04:37.480408 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:04:37.505524 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:04:37.508415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:04:37.526440 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 16:04:37.528786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:04:37.534637 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:04:37.554393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
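The several "Load Kernel Module ..." starts and finishes above are instances of systemd's modprobe@.service template: the text after the "@" (configfs, dm_mod, drm, efi_pstore, fuse, loop) becomes the module passed to modprobe. Roughly what the template looks like, paraphrased from upstream systemd rather than read off this system:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart makes a missing module non-fatal, which is why these units report "Deactivated successfully" rather than failing.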
Feb 13 16:04:37.562872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:04:37.566484 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:04:37.578520 systemd-journald[1517]: Time spent on flushing to /var/log/journal/ec2836a38474423b9f164c2e0cffbb7f is 75.788ms for 895 entries. Feb 13 16:04:37.578520 systemd-journald[1517]: System Journal (/var/log/journal/ec2836a38474423b9f164c2e0cffbb7f) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:04:37.672837 systemd-journald[1517]: Received client request to flush runtime journal. Feb 13 16:04:37.635883 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:04:37.640710 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:04:37.648613 systemd-tmpfiles[1569]: ACLs are not supported, ignoring. Feb 13 16:04:37.648639 systemd-tmpfiles[1569]: ACLs are not supported, ignoring. Feb 13 16:04:37.669908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:37.680789 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:37.687271 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:04:37.707533 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:04:37.727438 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:04:37.731070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:37.777742 udevadm[1586]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 16:04:37.806972 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:04:37.820503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:04:37.860983 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Feb 13 16:04:37.861044 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Feb 13 16:04:37.875032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:04:38.833414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:04:38.851649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:38.897942 systemd-udevd[1597]: Using default interface naming scheme 'v255'. Feb 13 16:04:38.943140 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:38.965457 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:04:38.997518 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:04:39.069138 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 16:04:39.100442 (udev-worker)[1617]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:39.153250 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:04:39.309915 systemd-networkd[1607]: lo: Link UP Feb 13 16:04:39.309935 systemd-networkd[1607]: lo: Gained carrier Feb 13 16:04:39.313712 systemd-networkd[1607]: Enumeration completed Feb 13 16:04:39.313943 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 16:04:39.317000 systemd-networkd[1607]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:39.317023 systemd-networkd[1607]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:04:39.319298 systemd-networkd[1607]: eth0: Link UP Feb 13 16:04:39.319618 systemd-networkd[1607]: eth0: Gained carrier Feb 13 16:04:39.319662 systemd-networkd[1607]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:39.326591 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:04:39.341496 systemd-networkd[1607]: eth0: DHCPv4 address 172.31.19.49/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:04:39.344997 systemd-networkd[1607]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:39.443868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:04:39.497188 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1604) Feb 13 16:04:39.644311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:39.726957 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:04:39.759376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:04:39.769411 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:04:39.815158 lvm[1726]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:04:39.853880 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:04:39.858571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:39.869380 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:04:39.882069 lvm[1729]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:04:39.921684 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:04:39.924673 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:04:39.927731 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:04:39.927980 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:04:39.930371 systemd[1]: Reached target machines.target - Containers. Feb 13 16:04:39.934641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:04:39.944474 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:04:39.956229 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:04:39.958876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:39.961396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:04:39.972444 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
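The "potentially unpredictable interface name" warnings above refer to Flatcar's catch-all DHCP unit: with net.ifnames=0 on the kernel command line the device keeps the kernel name eth0, so only a wildcard [Match] can claim it. The shape of such a .network unit, as a sketch (the shipped zz-default.network may set additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes

This is consistent with the DHCPv4 lease logged above (172.31.19.49/20 from 172.31.16.1) and the later IPv6LL gain on eth0.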
Feb 13 16:04:39.981831 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:04:39.992404 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:04:40.018744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:04:40.052152 kernel: loop0: detected capacity change from 0 to 52536 Feb 13 16:04:40.094171 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:04:40.096443 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:04:40.196205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:04:40.224146 kernel: loop1: detected capacity change from 0 to 194512 Feb 13 16:04:40.285123 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 16:04:40.445148 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 16:04:40.541171 kernel: loop4: detected capacity change from 0 to 52536 Feb 13 16:04:40.555144 kernel: loop5: detected capacity change from 0 to 194512 Feb 13 16:04:40.579162 kernel: loop6: detected capacity change from 0 to 114432 Feb 13 16:04:40.602159 kernel: loop7: detected capacity change from 0 to 114328 Feb 13 16:04:40.617610 (sd-merge)[1750]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 16:04:40.618795 (sd-merge)[1750]: Merged extensions into '/usr'. Feb 13 16:04:40.626070 systemd[1]: Reloading requested from client PID 1737 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:04:40.626380 systemd[1]: Reloading... Feb 13 16:04:40.639321 systemd-networkd[1607]: eth0: Gained IPv6LL Feb 13 16:04:40.778132 zram_generator::config[1782]: No configuration found. Feb 13 16:04:41.073107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:04:41.219555 systemd[1]: Reloading finished in 592 ms. Feb 13 16:04:41.248670 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:04:41.252302 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:04:41.268526 systemd[1]: Starting ensure-sysext.service... Feb 13 16:04:41.279737 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:04:41.302330 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:04:41.302387 systemd[1]: Reloading... Feb 13 16:04:41.355546 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:04:41.357476 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:04:41.360444 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:04:41.361384 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Feb 13 16:04:41.361605 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Feb 13 16:04:41.374807 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 16:04:41.377253 systemd-tmpfiles[1838]: Skipping /boot Feb 13 16:04:41.404510 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:04:41.404534 systemd-tmpfiles[1838]: Skipping /boot Feb 13 16:04:41.484131 zram_generator::config[1870]: No configuration found. Feb 13 16:04:41.735388 ldconfig[1733]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:04:41.790778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:04:41.938207 systemd[1]: Reloading finished in 634 ms. Feb 13 16:04:41.968844 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:04:41.978203 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:42.002584 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:42.008400 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:04:42.028416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:04:42.039603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:04:42.046440 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:04:42.064535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:42.079581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:42.085608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:42.103610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:04:42.106796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:42.117901 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:04:42.132151 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:42.132542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:42.144603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:42.145000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:42.157700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:42.161526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:42.185194 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:04:42.194282 augenrules[1962]: No rules Feb 13 16:04:42.195849 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:42.205420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:42.222513 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:04:42.241598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:42.263726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
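The (sd-merge) lines in the chunk above show systemd-sysext overlaying the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr, which is what triggers the unit reload that follows. An image is only merged if its embedded extension-release file matches the host's os-release; a sketch of such a file, with assumed field values since the log does not show its contents:

    # /usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

If ID (or ID=_any) and the level/version fields do not match the host, systemd-sysext refuses the image instead of merging it.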
Feb 13 16:04:42.269988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:42.271624 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:04:42.295820 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:04:42.302328 systemd[1]: Finished ensure-sysext.service. Feb 13 16:04:42.305997 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:42.311829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:42.315205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:42.331812 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:04:42.332379 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:04:42.343318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:42.345225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:42.348596 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:42.349181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:42.382861 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:04:42.395707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:04:42.395921 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:04:42.396009 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:04:42.412047 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:04:42.434741 systemd-resolved[1940]: Positive Trust Anchors: Feb 13 16:04:42.435387 systemd-resolved[1940]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:04:42.435536 systemd-resolved[1940]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:04:42.444078 systemd-resolved[1940]: Defaulting to hostname 'linux'. Feb 13 16:04:42.447707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:04:42.450173 systemd[1]: Reached target network.target - Network. Feb 13 16:04:42.452039 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:04:42.454216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:42.456443 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:04:42.458598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 16:04:42.460929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:04:42.463538 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:04:42.466135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:04:42.468775 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:04:42.471178 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:04:42.471243 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:04:42.472941 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:04:42.476240 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:04:42.481154 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:04:42.485753 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:04:42.490927 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:04:42.493171 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:04:42.495130 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:04:42.497273 systemd[1]: System is tainted: cgroupsv1 Feb 13 16:04:42.497503 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:04:42.497664 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:04:42.505309 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:04:42.510349 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:04:42.528623 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:04:42.534153 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:04:42.543003 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:04:42.546252 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:04:42.555561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:04:42.563785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:04:42.582403 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 16:04:42.600293 jq[1998]: false Feb 13 16:04:42.607434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:04:42.639488 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:04:42.675897 dbus-daemon[1997]: [system] SELinux support is enabled Feb 13 16:04:42.676274 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 16:04:42.683726 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:04:42.687325 dbus-daemon[1997]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1607 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 16:04:42.693505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 16:04:42.720536 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:04:42.724221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:04:42.741521 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:04:42.772324 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:04:42.781995 jq[2026]: true Feb 13 16:04:42.789351 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:04:42.794338 extend-filesystems[1999]: Found loop4 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found loop5 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found loop6 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found loop7 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p1 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p2 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p3 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found usr Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p4 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p6 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p7 Feb 13 16:04:42.797857 extend-filesystems[1999]: Found nvme0n1p9 Feb 13 16:04:42.797857 extend-filesystems[1999]: Checking size of /dev/nvme0n1p9 Feb 13 16:04:42.825040 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:04:42.825593 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:04:42.831964 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:04:42.832540 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 16:04:42.866840 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:04:42.871992 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:04:42.882576 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:04:42.950661 extend-filesystems[1999]: Resized partition /dev/nvme0n1p9 Feb 13 16:04:42.974809 extend-filesystems[2048]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:04:43.003635 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 16:04:43.021176 (ntainerd)[2047]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:04:43.026629 update_engine[2024]: I20250213 16:04:43.022493 2024 main.cc:92] Flatcar Update Engine starting Feb 13 16:04:43.048253 ntpd[2002]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: ---------------------------------------------------- Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: corporation. 
Support and training for ntp-4 are Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: available at https://www.nwtime.org/support Feb 13 16:04:43.050797 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: ---------------------------------------------------- Feb 13 16:04:43.048320 ntpd[2002]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:04:43.048347 ntpd[2002]: ---------------------------------------------------- Feb 13 16:04:43.066298 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: proto: precision = 0.096 usec (-23) Feb 13 16:04:43.048372 ntpd[2002]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:04:43.048394 ntpd[2002]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:04:43.048413 ntpd[2002]: corporation. Support and training for ntp-4 are Feb 13 16:04:43.048434 ntpd[2002]: available at https://www.nwtime.org/support Feb 13 16:04:43.048453 ntpd[2002]: ---------------------------------------------------- Feb 13 16:04:43.065193 ntpd[2002]: proto: precision = 0.096 usec (-23) Feb 13 16:04:43.067441 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 16:04:43.080831 update_engine[2024]: I20250213 16:04:43.080399 2024 update_check_scheduler.cc:74] Next update check in 4m38s Feb 13 16:04:43.069328 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:04:43.081459 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: basedate set to 2025-02-01 Feb 13 16:04:43.081459 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: gps base set to 2025-02-02 (week 2352) Feb 13 16:04:43.079366 ntpd[2002]: basedate set to 2025-02-01 Feb 13 16:04:43.069445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:04:43.079409 ntpd[2002]: gps base set to 2025-02-02 (week 2352) Feb 13 16:04:43.072514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:04:43.072576 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:04:43.086340 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 16:04:43.098471 ntpd[2002]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:04:43.098566 ntpd[2002]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:04:43.098694 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:04:43.098694 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:04:43.100798 ntpd[2002]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:04:43.100960 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:04:43.103960 ntpd[2002]: Listen normally on 3 eth0 172.31.19.49:123 Feb 13 16:04:43.105253 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listen normally on 3 eth0 172.31.19.49:123 Feb 13 16:04:43.105253 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listen normally on 4 lo [::1]:123 Feb 13 16:04:43.105253 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listen normally on 5 eth0 [fe80::4b1:c8ff:fe76:48f3%2]:123 Feb 13 16:04:43.105253 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: Listening on routing socket on fd #22 for interface updates Feb 13 16:04:43.104158 ntpd[2002]: Listen normally on 4 lo [::1]:123 Feb 13 16:04:43.104258 ntpd[2002]: Listen normally on 5 eth0 [fe80::4b1:c8ff:fe76:48f3%2]:123 Feb 13 16:04:43.104338 ntpd[2002]: Listening on routing socket on fd #22 for interface updates Feb 13 16:04:43.109114 jq[2044]: true Feb 13 16:04:43.131486 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 16:04:43.134605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:04:43.142314 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 16:04:43.157568 ntpd[2002]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:43.160331 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:43.160331 ntpd[2002]: 13 Feb 16:04:43 ntpd[2002]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:43.157645 ntpd[2002]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:43.205966 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 16:04:43.206073 tar[2038]: linux-arm64/helm Feb 13 16:04:43.280378 extend-filesystems[2048]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 16:04:43.280378 extend-filesystems[2048]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 16:04:43.280378 extend-filesystems[2048]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 16:04:43.258944 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 16:04:43.315765 extend-filesystems[1999]: Resized filesystem in /dev/nvme0n1p9 Feb 13 16:04:43.287494 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 16:04:43.308269 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:04:43.308786 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
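The resize figures above are counts of 4096-byte blocks, so the root filesystem grew from about 2.1 GiB to about 5.7 GiB. A quick check of the arithmetic in Python, using only the numbers logged above:

    # 553472 -> 1489915 blocks of 4096 bytes, per the EXT4/resize2fs lines
    old_bytes = 553472 * 4096
    new_bytes = 1489915 * 4096
    print(round(old_bytes / 2**30, 2), "GiB ->", round(new_bytes / 2**30, 2), "GiB")
    # prints: 2.11 GiB -> 5.68 GiB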
Feb 13 16:04:43.390217 coreos-metadata[1995]: Feb 13 16:04:43.386 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:04:43.398227 coreos-metadata[1995]: Feb 13 16:04:43.391 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 16:04:43.398227 coreos-metadata[1995]: Feb 13 16:04:43.392 INFO Fetch successful Feb 13 16:04:43.398227 coreos-metadata[1995]: Feb 13 16:04:43.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 16:04:43.402209 coreos-metadata[1995]: Feb 13 16:04:43.402 INFO Fetch successful Feb 13 16:04:43.402209 coreos-metadata[1995]: Feb 13 16:04:43.402 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 16:04:43.409319 coreos-metadata[1995]: Feb 13 16:04:43.404 INFO Fetch successful Feb 13 16:04:43.409319 coreos-metadata[1995]: Feb 13 16:04:43.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 16:04:43.410171 systemd-logind[2020]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 16:04:43.410216 systemd-logind[2020]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 16:04:43.410672 systemd-logind[2020]: New seat seat0. Feb 13 16:04:43.414467 coreos-metadata[1995]: Feb 13 16:04:43.411 INFO Fetch successful Feb 13 16:04:43.414467 coreos-metadata[1995]: Feb 13 16:04:43.411 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 16:04:43.414467 coreos-metadata[1995]: Feb 13 16:04:43.414 INFO Fetch failed with 404: resource not found Feb 13 16:04:43.414467 coreos-metadata[1995]: Feb 13 16:04:43.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 16:04:43.426238 coreos-metadata[1995]: Feb 13 16:04:43.417 INFO Fetch successful Feb 13 16:04:43.426238 coreos-metadata[1995]: Feb 13 16:04:43.418 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 16:04:43.426238 coreos-metadata[1995]: Feb 13 16:04:43.424 INFO Fetch successful Feb 13 16:04:43.426238 coreos-metadata[1995]: Feb 13 16:04:43.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 16:04:43.424105 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:04:43.432669 coreos-metadata[1995]: Feb 13 16:04:43.430 INFO Fetch successful Feb 13 16:04:43.432669 coreos-metadata[1995]: Feb 13 16:04:43.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 16:04:43.437638 coreos-metadata[1995]: Feb 13 16:04:43.437 INFO Fetch successful Feb 13 16:04:43.437638 coreos-metadata[1995]: Feb 13 16:04:43.437 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 16:04:43.439047 coreos-metadata[1995]: Feb 13 16:04:43.438 INFO Fetch successful Feb 13 16:04:43.576478 bash[2116]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:04:43.625878 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2098) Feb 13 16:04:43.626820 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:04:43.655072 amazon-ssm-agent[2079]: Initializing new seelog logger Feb 13 16:04:43.662280 amazon-ssm-agent[2079]: New Seelog Logger Creation Complete Feb 13 16:04:43.662280 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
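The "Putting http://169.254.169.254/latest/api/token" line followed by the meta-data fetches is the IMDSv2 session flow: a short-lived token is obtained with a PUT carrying a TTL header, then presented on every metadata read. A minimal Python sketch of the same exchange (the header names are the documented IMDSv2 ones; the 2021-01-03 path version matches the log):

    import urllib.request

    BASE = "http://169.254.169.254"

    # PUT .../latest/api/token with a TTL header to open an IMDSv2 session
    req = urllib.request.Request(
        BASE + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req).read().decode()

    # GET metadata with the token attached, as coreos-metadata does above
    req = urllib.request.Request(
        BASE + "/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req).read().decode())

The 404 on .../meta-data/ipv6 above is the expected outcome of this same flow on an instance with no IPv6 address assigned.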
Feb 13 16:04:43.662280 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.670179 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.672617 systemd[1]: Starting sshkeys.service... Feb 13 16:04:43.688507 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.688507 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.688738 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.689067 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.689067 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.699968 amazon-ssm-agent[2079]: 2025-02-13 16:04:43 INFO Proxy environment variables: Feb 13 16:04:43.702886 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:04:43.708415 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:04:43.711347 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.738243 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.738243 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.738441 amazon-ssm-agent[2079]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.776050 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:04:43.783743 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:04:43.798528 amazon-ssm-agent[2079]: 2025-02-13 16:04:43 INFO no_proxy: Feb 13 16:04:43.842592 locksmithd[2067]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:04:43.903730 amazon-ssm-agent[2079]: 2025-02-13 16:04:43 INFO https_proxy: Feb 13 16:04:44.015707 amazon-ssm-agent[2079]: 2025-02-13 16:04:43 INFO http_proxy: Feb 13 16:04:44.118984 amazon-ssm-agent[2079]: 2025-02-13 16:04:43 INFO Checking if agent identity type OnPrem can be assumed Feb 13 16:04:44.129289 containerd[2047]: time="2025-02-13T16:04:44.127738742Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 16:04:44.219406 amazon-ssm-agent[2079]: 2025-02-13 16:04:43 INFO Checking if agent identity type EC2 can be assumed Feb 13 16:04:44.327997 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO Agent will take identity from EC2 Feb 13 16:04:44.374171 coreos-metadata[2151]: Feb 13 16:04:44.373 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:04:44.377469 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 16:04:44.377741 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
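
Note: amazon-ssm-agent logs the "Found config file / Applying config override / processing appconfig overrides" triple once per consumer of its configuration, so the repetition above is normal. A rough sketch of such an override step, assuming a shallow JSON merge (the agent's actual merge rules may differ):

    import json, pathlib

    def load_with_override(defaults, path="/etc/amazon/ssm/amazon-ssm-agent.json"):
        # Start from built-in defaults, then shallow-merge the override
        # file on top of them if it exists.
        cfg = dict(defaults)
        p = pathlib.Path(path)
        if p.exists():
            cfg.update(json.loads(p.read_text()))
        return cfg

    print(load_with_override({"Profile": {"ShareCreds": True}}))
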
Feb 13 16:04:44.380625 coreos-metadata[2151]: Feb 13 16:04:44.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 16:04:44.384146 coreos-metadata[2151]: Feb 13 16:04:44.381 INFO Fetch successful Feb 13 16:04:44.384146 coreos-metadata[2151]: Feb 13 16:04:44.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 16:04:44.386390 coreos-metadata[2151]: Feb 13 16:04:44.384 INFO Fetch successful Feb 13 16:04:44.386865 dbus-daemon[1997]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2066 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 16:04:44.388846 unknown[2151]: wrote ssh authorized keys file for user: core Feb 13 16:04:44.421041 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.436605 containerd[2047]: time="2025-02-13T16:04:44.436150839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.447708508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.447789148Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.447831076Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.448352848Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.448408096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.448587316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:44.448782 containerd[2047]: time="2025-02-13T16:04:44.448626928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.452253 containerd[2047]: time="2025-02-13T16:04:44.451067152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:44.452484 containerd[2047]: time="2025-02-13T16:04:44.452435044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.452628 containerd[2047]: time="2025-02-13T16:04:44.452593600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:44.452731 containerd[2047]: time="2025-02-13T16:04:44.452702596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.453065 containerd[2047]: time="2025-02-13T16:04:44.453030028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.455067 containerd[2047]: time="2025-02-13T16:04:44.455021464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:44.459403 containerd[2047]: time="2025-02-13T16:04:44.457067608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:44.459403 containerd[2047]: time="2025-02-13T16:04:44.457190068Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:04:44.459403 containerd[2047]: time="2025-02-13T16:04:44.457447360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:04:44.459403 containerd[2047]: time="2025-02-13T16:04:44.457611004Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:04:44.473909 containerd[2047]: time="2025-02-13T16:04:44.473836000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:04:44.474049 containerd[2047]: time="2025-02-13T16:04:44.473948992Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:04:44.474049 containerd[2047]: time="2025-02-13T16:04:44.473990560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:04:44.474049 containerd[2047]: time="2025-02-13T16:04:44.474026056Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:04:44.474256 containerd[2047]: time="2025-02-13T16:04:44.474064924Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:04:44.474453 containerd[2047]: time="2025-02-13T16:04:44.474390172Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475079344Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475469740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475513732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475550932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475584052Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475618192Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475659700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475701412Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475735420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475775716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475819132Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475848844Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475891252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.477118 containerd[2047]: time="2025-02-13T16:04:44.475924672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.477788 containerd[2047]: time="2025-02-13T16:04:44.475955776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.477788 containerd[2047]: time="2025-02-13T16:04:44.475987468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.477788 containerd[2047]: time="2025-02-13T16:04:44.476018848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.477788 containerd[2047]: time="2025-02-13T16:04:44.476050576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.478842 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 16:04:44.493725 containerd[2047]: time="2025-02-13T16:04:44.493642372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.493883 containerd[2047]: time="2025-02-13T16:04:44.493731268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.493883 containerd[2047]: time="2025-02-13T16:04:44.493773472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.493883 containerd[2047]: time="2025-02-13T16:04:44.493812292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.493883 containerd[2047]: time="2025-02-13T16:04:44.493850524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.494053 containerd[2047]: time="2025-02-13T16:04:44.493881988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 16:04:44.494053 containerd[2047]: time="2025-02-13T16:04:44.493916980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.494053 containerd[2047]: time="2025-02-13T16:04:44.493954600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:04:44.494053 containerd[2047]: time="2025-02-13T16:04:44.494002192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.494053 containerd[2047]: time="2025-02-13T16:04:44.494032300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.494338 containerd[2047]: time="2025-02-13T16:04:44.494060236Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:04:44.494338 containerd[2047]: time="2025-02-13T16:04:44.494216296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:04:44.494338 containerd[2047]: time="2025-02-13T16:04:44.494255920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:04:44.494338 containerd[2047]: time="2025-02-13T16:04:44.494286652Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:04:44.494338 containerd[2047]: time="2025-02-13T16:04:44.494315920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:04:44.494559 containerd[2047]: time="2025-02-13T16:04:44.494340568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:04:44.494559 containerd[2047]: time="2025-02-13T16:04:44.494369872Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:04:44.494559 containerd[2047]: time="2025-02-13T16:04:44.494402152Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:04:44.494559 containerd[2047]: time="2025-02-13T16:04:44.494428708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 16:04:44.496631 containerd[2047]: time="2025-02-13T16:04:44.494953864Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:04:44.496631 containerd[2047]: time="2025-02-13T16:04:44.495106864Z" level=info msg="Connect containerd service" Feb 13 16:04:44.496631 containerd[2047]: time="2025-02-13T16:04:44.495197548Z" level=info msg="using legacy CRI server" Feb 13 16:04:44.496631 containerd[2047]: time="2025-02-13T16:04:44.495217324Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:04:44.496631 containerd[2047]: time="2025-02-13T16:04:44.495397084Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.506809432Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507477664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507589696Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507677692Z" level=info msg="Start subscribing containerd event" Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507745084Z" level=info msg="Start recovering state" Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507864064Z" level=info msg="Start event monitor" Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507895012Z" level=info msg="Start snapshots syncer" Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507929032Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:04:44.509098 containerd[2047]: time="2025-02-13T16:04:44.507948952Z" level=info msg="Start streaming server" Feb 13 16:04:44.520342 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.520916 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:04:44.548436 containerd[2047]: time="2025-02-13T16:04:44.508072636Z" level=info msg="containerd successfully booted in 0.392149s" Feb 13 16:04:44.599134 update-ssh-keys[2225]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:04:44.599791 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:04:44.622762 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.643300 systemd[1]: Finished sshkeys.service. Feb 13 16:04:44.660845 polkitd[2217]: Started polkitd version 121 Feb 13 16:04:44.721340 polkitd[2217]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 16:04:44.721999 polkitd[2217]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 16:04:44.730820 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 16:04:44.737310 polkitd[2217]: Finished loading, compiling and executing 2 rules Feb 13 16:04:44.744980 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 16:04:44.746248 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 16:04:44.753661 polkitd[2217]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 16:04:44.845377 systemd-resolved[1940]: System hostname changed to 'ip-172-31-19-49'. Feb 13 16:04:44.845564 systemd-hostnamed[2066]: Hostname set to (transient) Feb 13 16:04:44.855915 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 16:04:44.954610 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 16:04:45.012957 sshd_keygen[2046]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:04:45.055408 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 16:04:45.145719 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:04:45.156686 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [Registrar] Starting registrar module Feb 13 16:04:45.163930 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:04:45.210021 systemd[1]: issuegen.service: Deactivated successfully. 
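
Note: once containerd reports "serving..." on /run/containerd/containerd.sock it is accepting gRPC clients, so a crude readiness probe is simply attempting a connection to that unix socket. A minimal sketch (only the socket path is taken from the log):

    import socket

    def socket_ready(path="/run/containerd/containerd.sock", timeout=1.0):
        # True if something is accepting connections on the unix socket.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    print(socket_ready())
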
Feb 13 16:04:45.211068 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:04:45.230658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:04:45.259300 amazon-ssm-agent[2079]: 2025-02-13 16:04:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 16:04:45.287856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:04:45.308001 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:04:45.322707 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:04:45.325510 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:04:45.432502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:04:45.455890 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:04:45.800774 tar[2038]: linux-arm64/LICENSE Feb 13 16:04:45.804216 tar[2038]: linux-arm64/README.md Feb 13 16:04:45.844799 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:04:45.847875 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:04:45.852113 systemd[1]: Startup finished in 11.747s (kernel) + 11.673s (userspace) = 23.420s. Feb 13 16:04:46.339419 kubelet[2280]: E0213 16:04:46.339014 2280 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:04:46.348543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:04:46.349013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:04:46.673762 amazon-ssm-agent[2079]: 2025-02-13 16:04:46 INFO [EC2Identity] EC2 registration was successful. Feb 13 16:04:46.711936 amazon-ssm-agent[2079]: 2025-02-13 16:04:46 INFO [CredentialRefresher] credentialRefresher has started Feb 13 16:04:46.712255 amazon-ssm-agent[2079]: 2025-02-13 16:04:46 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 16:04:46.712552 amazon-ssm-agent[2079]: 2025-02-13 16:04:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 16:04:46.775292 amazon-ssm-agent[2079]: 2025-02-13 16:04:46 INFO [CredentialRefresher] Next credential rotation will be in 30.958316747533335 minutes Feb 13 16:04:47.756597 amazon-ssm-agent[2079]: 2025-02-13 16:04:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 16:04:47.858713 amazon-ssm-agent[2079]: 2025-02-13 16:04:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2300) started Feb 13 16:04:47.959193 amazon-ssm-agent[2079]: 2025-02-13 16:04:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 16:04:49.633940 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:04:49.643649 systemd[1]: Started sshd@0-172.31.19.49:22-139.178.68.195:57268.service - OpenSSH per-connection server daemon (139.178.68.195:57268). 
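
Note: the kubelet failure above is expected on a freshly provisioned node: /var/lib/kubelet/config.yaml is only written when the node joins a cluster (e.g. via kubeadm), so the unit exits with status 1 and systemd's Restart= logic re-launches it, driving the "restart counter" entries seen later in this log. A toy sketch of that supervise-and-retry behavior (an illustration, not systemd's implementation; the kubelet invocation is hypothetical):

    import subprocess, time

    def supervise(cmd, restart_sec=10.0):
        # Re-run the command after a delay whenever it exits non-zero,
        # mimicking Restart=on-failure with RestartSec=.
        counter = 0
        while True:
            if subprocess.call(cmd) == 0:
                return
            counter += 1
            print(f"Scheduled restart job, restart counter is at {counter}")
            time.sleep(restart_sec)

    supervise(["/usr/bin/kubelet"])  # hypothetical invocation
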
Feb 13 16:04:49.912823 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 57268 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:49.915591 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:49.933467 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:04:49.940609 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:04:49.946737 systemd-logind[2020]: New session 1 of user core. Feb 13 16:04:49.979625 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:04:50.001809 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:04:50.009539 (systemd)[2315]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:04:50.216266 systemd-resolved[1940]: Clock change detected. Flushing caches. Feb 13 16:04:50.396018 systemd[2315]: Queued start job for default target default.target. Feb 13 16:04:50.396979 systemd[2315]: Created slice app.slice - User Application Slice. Feb 13 16:04:50.397061 systemd[2315]: Reached target paths.target - Paths. Feb 13 16:04:50.397109 systemd[2315]: Reached target timers.target - Timers. Feb 13 16:04:50.405639 systemd[2315]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:04:50.423461 systemd[2315]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:04:50.424900 systemd[2315]: Reached target sockets.target - Sockets. Feb 13 16:04:50.424955 systemd[2315]: Reached target basic.target - Basic System. Feb 13 16:04:50.425055 systemd[2315]: Reached target default.target - Main User Target. Feb 13 16:04:50.425123 systemd[2315]: Startup finished in 235ms. Feb 13 16:04:50.425853 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:04:50.433104 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:04:50.588032 systemd[1]: Started sshd@1-172.31.19.49:22-139.178.68.195:57270.service - OpenSSH per-connection server daemon (139.178.68.195:57270). Feb 13 16:04:50.764852 sshd[2328]: Accepted publickey for core from 139.178.68.195 port 57270 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:50.767579 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:50.775749 systemd-logind[2020]: New session 2 of user core. Feb 13 16:04:50.783954 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:04:50.917056 sshd[2328]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:50.922832 systemd[1]: sshd@1-172.31.19.49:22-139.178.68.195:57270.service: Deactivated successfully. Feb 13 16:04:50.930315 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:04:50.931260 systemd-logind[2020]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:04:50.934294 systemd-logind[2020]: Removed session 2. Feb 13 16:04:50.953984 systemd[1]: Started sshd@2-172.31.19.49:22-139.178.68.195:57284.service - OpenSSH per-connection server daemon (139.178.68.195:57284). Feb 13 16:04:51.125657 sshd[2336]: Accepted publickey for core from 139.178.68.195 port 57284 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:51.128338 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:51.139301 systemd-logind[2020]: New session 3 of user core. 
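
Note: the "Accepted publickey ... SHA256:ucMx..." lines report the fingerprint of the authenticating key: the unpadded base64 of a SHA-256 digest over the raw key blob. A short sketch that derives the same fingerprint from an authorized_keys entry (the file path matches the one updated earlier in this log):

    import base64, hashlib

    def sha256_fingerprint(authorized_keys_line):
        # Field 1 of an authorized_keys entry is the base64 key blob;
        # OpenSSH fingerprints are SHA-256 over that blob, base64 without '='.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:
        print(sha256_fingerprint(f.readline()))
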
Feb 13 16:04:51.143105 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:04:51.266745 sshd[2336]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:51.274048 systemd[1]: sshd@2-172.31.19.49:22-139.178.68.195:57284.service: Deactivated successfully. Feb 13 16:04:51.275690 systemd-logind[2020]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:04:51.280294 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:04:51.282404 systemd-logind[2020]: Removed session 3. Feb 13 16:04:51.296918 systemd[1]: Started sshd@3-172.31.19.49:22-139.178.68.195:57298.service - OpenSSH per-connection server daemon (139.178.68.195:57298). Feb 13 16:04:51.483126 sshd[2344]: Accepted publickey for core from 139.178.68.195 port 57298 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:51.485947 sshd[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:51.497193 systemd-logind[2020]: New session 4 of user core. Feb 13 16:04:51.509014 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:04:51.646149 sshd[2344]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:51.654301 systemd[1]: sshd@3-172.31.19.49:22-139.178.68.195:57298.service: Deactivated successfully. Feb 13 16:04:51.660930 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:04:51.662806 systemd-logind[2020]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:04:51.664712 systemd-logind[2020]: Removed session 4. Feb 13 16:04:51.674991 systemd[1]: Started sshd@4-172.31.19.49:22-139.178.68.195:57306.service - OpenSSH per-connection server daemon (139.178.68.195:57306). Feb 13 16:04:51.864380 sshd[2352]: Accepted publickey for core from 139.178.68.195 port 57306 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:51.866249 sshd[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:51.875456 systemd-logind[2020]: New session 5 of user core. Feb 13 16:04:51.886092 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:04:52.032031 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:04:52.032732 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:52.047984 sudo[2356]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:52.071818 sshd[2352]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:52.080370 systemd-logind[2020]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:04:52.080954 systemd[1]: sshd@4-172.31.19.49:22-139.178.68.195:57306.service: Deactivated successfully. Feb 13 16:04:52.084379 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:04:52.088798 systemd-logind[2020]: Removed session 5. Feb 13 16:04:52.102073 systemd[1]: Started sshd@5-172.31.19.49:22-139.178.68.195:57316.service - OpenSSH per-connection server daemon (139.178.68.195:57316). Feb 13 16:04:52.294563 sshd[2361]: Accepted publickey for core from 139.178.68.195 port 57316 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:52.297118 sshd[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:52.304719 systemd-logind[2020]: New session 6 of user core. Feb 13 16:04:52.313008 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 16:04:52.421192 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:04:52.421990 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:52.428536 sudo[2366]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:52.439380 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 16:04:52.440106 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:52.468063 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:52.474474 auditctl[2369]: No rules Feb 13 16:04:52.475576 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:04:52.476262 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:52.494971 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:52.544273 augenrules[2388]: No rules Feb 13 16:04:52.548839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:52.553256 sudo[2365]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:52.578134 sshd[2361]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:52.588141 systemd[1]: sshd@5-172.31.19.49:22-139.178.68.195:57316.service: Deactivated successfully. Feb 13 16:04:52.595619 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:04:52.597636 systemd-logind[2020]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:04:52.611102 systemd[1]: Started sshd@6-172.31.19.49:22-139.178.68.195:57328.service - OpenSSH per-connection server daemon (139.178.68.195:57328). Feb 13 16:04:52.612785 systemd-logind[2020]: Removed session 6. Feb 13 16:04:52.800023 sshd[2397]: Accepted publickey for core from 139.178.68.195 port 57328 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:52.802997 sshd[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:52.815012 systemd-logind[2020]: New session 7 of user core. Feb 13 16:04:52.820458 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:04:52.931897 sudo[2401]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:04:52.933600 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:53.607610 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:04:53.624255 (dockerd)[2417]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:04:54.189062 dockerd[2417]: time="2025-02-13T16:04:54.188909131Z" level=info msg="Starting up" Feb 13 16:04:56.090328 systemd[1]: var-lib-docker-metacopy\x2dcheck1888535870-merged.mount: Deactivated successfully. Feb 13 16:04:56.100488 dockerd[2417]: time="2025-02-13T16:04:56.100374033Z" level=info msg="Loading containers: start." Feb 13 16:04:56.327511 kernel: Initializing XFRM netlink socket Feb 13 16:04:56.404885 (udev-worker)[2441]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:56.502605 systemd-networkd[1607]: docker0: Link UP Feb 13 16:04:56.530012 dockerd[2417]: time="2025-02-13T16:04:56.529961627Z" level=info msg="Loading containers: done." 
Feb 13 16:04:56.554655 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1902817079-merged.mount: Deactivated successfully. Feb 13 16:04:56.556621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:04:56.565792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:04:56.574247 dockerd[2417]: time="2025-02-13T16:04:56.572171615Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:04:56.574827 dockerd[2417]: time="2025-02-13T16:04:56.572326127Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 16:04:56.575208 dockerd[2417]: time="2025-02-13T16:04:56.575036891Z" level=info msg="Daemon has completed initialization" Feb 13 16:04:56.749610 dockerd[2417]: time="2025-02-13T16:04:56.749102928Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:04:56.752103 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:04:57.642905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:04:57.663311 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:04:57.761650 kubelet[2569]: E0213 16:04:57.761529 2569 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:04:57.771582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:04:57.771989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:04:58.357697 containerd[2047]: time="2025-02-13T16:04:58.357618048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 16:04:59.057945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941199162.mount: Deactivated successfully. 
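
Note: "API listen on /run/docker.sock" means dockerd is serving its HTTP API over a unix socket. A minimal stdlib sketch that talks to it (GET /version is a real Docker endpoint; the connection subclass is just one way to route HTTP over a unix socket):

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path="/run/docker.sock"):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            # Replace the TCP connect with a unix-domain one.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection()
    conn.request("GET", "/version")
    print(conn.getresponse().read().decode())
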
Feb 13 16:05:00.867461 containerd[2047]: time="2025-02-13T16:05:00.865488532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:00.869277 containerd[2047]: time="2025-02-13T16:05:00.869209132Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861" Feb 13 16:05:00.871634 containerd[2047]: time="2025-02-13T16:05:00.871556968Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:00.877817 containerd[2047]: time="2025-02-13T16:05:00.877716604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:00.880317 containerd[2047]: time="2025-02-13T16:05:00.880025548Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.522336928s" Feb 13 16:05:00.880317 containerd[2047]: time="2025-02-13T16:05:00.880086124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 16:05:00.919450 containerd[2047]: time="2025-02-13T16:05:00.919126577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 16:05:02.755068 containerd[2047]: time="2025-02-13T16:05:02.754715454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:02.757196 containerd[2047]: time="2025-02-13T16:05:02.757107114Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091" Feb 13 16:05:02.758686 containerd[2047]: time="2025-02-13T16:05:02.758621598Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:02.765456 containerd[2047]: time="2025-02-13T16:05:02.765324834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:02.768394 containerd[2047]: time="2025-02-13T16:05:02.768100866Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.848906513s" Feb 13 16:05:02.768394 containerd[2047]: time="2025-02-13T16:05:02.768192114Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
16:05:02.813195 containerd[2047]: time="2025-02-13T16:05:02.813021582Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 16:05:04.007925 containerd[2047]: time="2025-02-13T16:05:04.007350040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:04.011096 containerd[2047]: time="2025-02-13T16:05:04.010979680Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980" Feb 13 16:05:04.013461 containerd[2047]: time="2025-02-13T16:05:04.011404804Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:04.020866 containerd[2047]: time="2025-02-13T16:05:04.020792548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:04.023609 containerd[2047]: time="2025-02-13T16:05:04.023539576Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.210459002s" Feb 13 16:05:04.023743 containerd[2047]: time="2025-02-13T16:05:04.023604868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 16:05:04.063990 containerd[2047]: time="2025-02-13T16:05:04.063934132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 16:05:05.488992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731330887.mount: Deactivated successfully. 
Feb 13 16:05:06.184492 containerd[2047]: time="2025-02-13T16:05:06.184132375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:06.185997 containerd[2047]: time="2025-02-13T16:05:06.185931343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375" Feb 13 16:05:06.188341 containerd[2047]: time="2025-02-13T16:05:06.188223895Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:06.192637 containerd[2047]: time="2025-02-13T16:05:06.192530251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:06.194443 containerd[2047]: time="2025-02-13T16:05:06.193976431Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 2.129966759s" Feb 13 16:05:06.194443 containerd[2047]: time="2025-02-13T16:05:06.194107843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 16:05:06.241816 containerd[2047]: time="2025-02-13T16:05:06.241708555Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:05:06.886026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172321168.mount: Deactivated successfully. Feb 13 16:05:08.003957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:05:08.018880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:09.963806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:09.980057 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:10.069657 kubelet[2725]: E0213 16:05:10.069564 2725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:10.075051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:10.075934 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
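
Note: each "Pulled image ... in Ns" entry pairs a byte count with a wall-clock duration, so effective pull throughput falls out directly; for kube-proxy above, 25272394 bytes in 2.129966759s is roughly 11.3 MiB/s. The arithmetic:

    def pull_mib_per_s(size_bytes, seconds):
        # Bytes per second, scaled to MiB/s.
        return size_bytes / seconds / 2**20

    print(f"{pull_mib_per_s(25272394, 2.129966759):.1f} MiB/s")  # ~11.3
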
Feb 13 16:05:10.859109 containerd[2047]: time="2025-02-13T16:05:10.859045178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:10.890975 containerd[2047]: time="2025-02-13T16:05:10.890899370Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 16:05:10.934302 containerd[2047]: time="2025-02-13T16:05:10.934208186Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.017060 containerd[2047]: time="2025-02-13T16:05:11.016941251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.020202 containerd[2047]: time="2025-02-13T16:05:11.019463015Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 4.777695804s" Feb 13 16:05:11.020202 containerd[2047]: time="2025-02-13T16:05:11.019529183Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 16:05:11.057888 containerd[2047]: time="2025-02-13T16:05:11.057841751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 16:05:11.760086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483353105.mount: Deactivated successfully. 
Feb 13 16:05:11.770847 containerd[2047]: time="2025-02-13T16:05:11.770391723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.772050 containerd[2047]: time="2025-02-13T16:05:11.772002759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 16:05:11.773763 containerd[2047]: time="2025-02-13T16:05:11.773679699Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.779935 containerd[2047]: time="2025-02-13T16:05:11.779820819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.782682 containerd[2047]: time="2025-02-13T16:05:11.781819551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 723.739384ms" Feb 13 16:05:11.782682 containerd[2047]: time="2025-02-13T16:05:11.781898619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 16:05:11.823259 containerd[2047]: time="2025-02-13T16:05:11.822902919Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 16:05:12.551842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698393379.mount: Deactivated successfully. Feb 13 16:05:15.026277 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 16:05:15.243459 containerd[2047]: time="2025-02-13T16:05:15.241640836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:15.244635 containerd[2047]: time="2025-02-13T16:05:15.244574644Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Feb 13 16:05:15.246363 containerd[2047]: time="2025-02-13T16:05:15.246306040Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:15.252591 containerd[2047]: time="2025-02-13T16:05:15.252527032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:15.255376 containerd[2047]: time="2025-02-13T16:05:15.255317536Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.432356333s" Feb 13 16:05:15.255632 containerd[2047]: time="2025-02-13T16:05:15.255596788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 16:05:20.254048 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 16:05:20.264302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:21.188061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:21.197645 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:21.293507 kubelet[2871]: E0213 16:05:21.293358 2871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:21.303018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:21.303501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:24.004900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:24.024812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:24.057713 systemd[1]: Reloading requested from client PID 2884 ('systemctl') (unit session-7.scope)... Feb 13 16:05:24.057748 systemd[1]: Reloading... Feb 13 16:05:24.268485 zram_generator::config[2929]: No configuration found. Feb 13 16:05:24.569030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:24.750882 systemd[1]: Reloading finished in 692 ms. Feb 13 16:05:24.853741 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:05:24.854101 systemd[1]: kubelet.service: Failed with result 'signal'. 
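
Note: the docker.socket warning during the reload above is systemd normalizing a legacy path: on systemd-based distros /var/run is a symlink to /run, so ListenStream=/var/run/docker.sock resolves to /run/docker.sock. A quick check:

    import os

    # /var/run -> /run, hence the unit-file update suggested in the log.
    print(os.path.realpath("/var/run/docker.sock"))  # -> /run/docker.sock
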
Feb 13 16:05:24.854962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:24.863178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:25.570898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:25.593257 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:05:25.689856 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:25.689856 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:05:25.689856 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:25.690492 kubelet[2996]: I0213 16:05:25.689966 2996 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:05:26.473548 kubelet[2996]: I0213 16:05:26.473352 2996 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:05:26.473548 kubelet[2996]: I0213 16:05:26.473405 2996 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:05:26.473865 kubelet[2996]: I0213 16:05:26.473821 2996 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:05:26.508248 kubelet[2996]: I0213 16:05:26.507602 2996 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:26.508248 kubelet[2996]: E0213 16:05:26.508162 2996 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.529166 kubelet[2996]: I0213 16:05:26.529101 2996 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:05:26.529957 kubelet[2996]: I0213 16:05:26.529923 2996 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:05:26.530292 kubelet[2996]: I0213 16:05:26.530257 2996 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:05:26.530500 kubelet[2996]: I0213 16:05:26.530306 2996 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:05:26.530500 kubelet[2996]: I0213 16:05:26.530328 2996 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:05:26.533257 kubelet[2996]: I0213 16:05:26.533208 2996 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:26.538877 kubelet[2996]: I0213 16:05:26.538508 2996 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:05:26.538877 kubelet[2996]: I0213 16:05:26.538563 2996 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:05:26.538877 kubelet[2996]: I0213 16:05:26.538610 2996 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:05:26.538877 kubelet[2996]: I0213 16:05:26.538645 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:05:26.541995 kubelet[2996]: W0213 16:05:26.541910 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-49&limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.542864 kubelet[2996]: E0213 16:05:26.542200 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-49&limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.542864 kubelet[2996]: W0213 16:05:26.542742 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: 
connect: connection refused Feb 13 16:05:26.542864 kubelet[2996]: E0213 16:05:26.542821 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.543334 kubelet[2996]: I0213 16:05:26.543305 2996 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:05:26.543989 kubelet[2996]: I0213 16:05:26.543958 2996 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:05:26.547069 kubelet[2996]: W0213 16:05:26.547028 2996 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 16:05:26.548617 kubelet[2996]: I0213 16:05:26.548383 2996 server.go:1256] "Started kubelet" Feb 13 16:05:26.552470 kubelet[2996]: I0213 16:05:26.552149 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:05:26.559500 kubelet[2996]: E0213 16:05:26.559455 2996 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.49:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-49.1823d0223bf21c00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-49,UID:ip-172-31-19-49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-49,},FirstTimestamp:2025-02-13 16:05:26.54833152 +0000 UTC m=+0.947021274,LastTimestamp:2025-02-13 16:05:26.54833152 +0000 UTC m=+0.947021274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-49,}" Feb 13 16:05:26.560468 kubelet[2996]: I0213 16:05:26.560199 2996 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:05:26.563409 kubelet[2996]: I0213 16:05:26.563363 2996 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:05:26.564623 kubelet[2996]: I0213 16:05:26.564181 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:05:26.564792 kubelet[2996]: I0213 16:05:26.564708 2996 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:05:26.567726 kubelet[2996]: I0213 16:05:26.567664 2996 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:05:26.571247 kubelet[2996]: I0213 16:05:26.567862 2996 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:05:26.572393 kubelet[2996]: I0213 16:05:26.572298 2996 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:05:26.572393 kubelet[2996]: W0213 16:05:26.569787 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.572393 kubelet[2996]: E0213 16:05:26.572399 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.19.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.572393 kubelet[2996]: E0213 16:05:26.569965 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-49?timeout=10s\": dial tcp 172.31.19.49:6443: connect: connection refused" interval="200ms" Feb 13 16:05:26.572393 kubelet[2996]: I0213 16:05:26.570879 2996 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:05:26.573868 kubelet[2996]: I0213 16:05:26.572608 2996 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:05:26.577340 kubelet[2996]: E0213 16:05:26.577290 2996 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:05:26.581874 kubelet[2996]: I0213 16:05:26.581839 2996 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:05:26.606470 kubelet[2996]: I0213 16:05:26.604554 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:05:26.619105 kubelet[2996]: I0213 16:05:26.619046 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:05:26.619105 kubelet[2996]: I0213 16:05:26.619102 2996 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:05:26.619272 kubelet[2996]: I0213 16:05:26.619133 2996 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:05:26.619272 kubelet[2996]: E0213 16:05:26.619248 2996 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:05:26.632939 kubelet[2996]: W0213 16:05:26.632854 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.633217 kubelet[2996]: E0213 16:05:26.633188 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:26.646051 kubelet[2996]: I0213 16:05:26.645959 2996 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:05:26.646051 kubelet[2996]: I0213 16:05:26.646032 2996 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:05:26.646263 kubelet[2996]: I0213 16:05:26.646070 2996 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:26.649281 kubelet[2996]: I0213 16:05:26.649197 2996 policy_none.go:49] "None policy: Start" Feb 13 16:05:26.650841 kubelet[2996]: I0213 16:05:26.650786 2996 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:05:26.651098 kubelet[2996]: I0213 16:05:26.650896 2996 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:05:26.660872 kubelet[2996]: I0213 16:05:26.660787 2996 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Feb 13 16:05:26.661530 kubelet[2996]: I0213 16:05:26.661478 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:05:26.673830 kubelet[2996]: I0213 16:05:26.673565 2996 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-49" Feb 13 16:05:26.673830 kubelet[2996]: E0213 16:05:26.673573 2996 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-49\" not found" Feb 13 16:05:26.674662 kubelet[2996]: E0213 16:05:26.674630 2996 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.49:6443/api/v1/nodes\": dial tcp 172.31.19.49:6443: connect: connection refused" node="ip-172-31-19-49" Feb 13 16:05:26.719947 kubelet[2996]: I0213 16:05:26.719863 2996 topology_manager.go:215] "Topology Admit Handler" podUID="675033b572c084e0a07cbc98060ec4cf" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-49" Feb 13 16:05:26.722814 kubelet[2996]: I0213 16:05:26.722411 2996 topology_manager.go:215] "Topology Admit Handler" podUID="77df105a347320240770ecf38f7e8363" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:26.725948 kubelet[2996]: I0213 16:05:26.725789 2996 topology_manager.go:215] "Topology Admit Handler" podUID="66df95165514af213252cbc855e2ec16" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-49" Feb 13 16:05:26.773814 kubelet[2996]: E0213 16:05:26.773578 2996 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.49:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-49.1823d0223bf21c00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-49,UID:ip-172-31-19-49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-49,},FirstTimestamp:2025-02-13 16:05:26.54833152 +0000 UTC m=+0.947021274,LastTimestamp:2025-02-13 16:05:26.54833152 +0000 UTC m=+0.947021274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-49,}" Feb 13 16:05:26.774079 kubelet[2996]: E0213 16:05:26.773881 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-49?timeout=10s\": dial tcp 172.31.19.49:6443: connect: connection refused" interval="400ms" Feb 13 16:05:26.873827 kubelet[2996]: I0213 16:05:26.873652 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/675033b572c084e0a07cbc98060ec4cf-ca-certs\") pod \"kube-apiserver-ip-172-31-19-49\" (UID: \"675033b572c084e0a07cbc98060ec4cf\") " pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:26.873827 kubelet[2996]: I0213 16:05:26.873759 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/675033b572c084e0a07cbc98060ec4cf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-49\" (UID: \"675033b572c084e0a07cbc98060ec4cf\") " pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:26.874104 kubelet[2996]: 
I0213 16:05:26.873886 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:26.874104 kubelet[2996]: I0213 16:05:26.874029 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:26.874278 kubelet[2996]: I0213 16:05:26.874103 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/675033b572c084e0a07cbc98060ec4cf-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-49\" (UID: \"675033b572c084e0a07cbc98060ec4cf\") " pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:26.874278 kubelet[2996]: I0213 16:05:26.874206 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:26.874278 kubelet[2996]: I0213 16:05:26.874274 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:26.874511 kubelet[2996]: I0213 16:05:26.874334 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:26.874511 kubelet[2996]: I0213 16:05:26.874401 2996 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66df95165514af213252cbc855e2ec16-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-49\" (UID: \"66df95165514af213252cbc855e2ec16\") " pod="kube-system/kube-scheduler-ip-172-31-19-49" Feb 13 16:05:26.878834 kubelet[2996]: I0213 16:05:26.878657 2996 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-49" Feb 13 16:05:26.879563 kubelet[2996]: E0213 16:05:26.879509 2996 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.49:6443/api/v1/nodes\": dial tcp 172.31.19.49:6443: connect: connection refused" node="ip-172-31-19-49" Feb 13 16:05:27.032781 containerd[2047]: time="2025-02-13T16:05:27.032395286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-49,Uid:675033b572c084e0a07cbc98060ec4cf,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:27.043703 containerd[2047]: 
time="2025-02-13T16:05:27.043561754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-49,Uid:77df105a347320240770ecf38f7e8363,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:27.046736 containerd[2047]: time="2025-02-13T16:05:27.046182182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-49,Uid:66df95165514af213252cbc855e2ec16,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:27.174866 kubelet[2996]: E0213 16:05:27.174813 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-49?timeout=10s\": dial tcp 172.31.19.49:6443: connect: connection refused" interval="800ms" Feb 13 16:05:27.282902 kubelet[2996]: I0213 16:05:27.282600 2996 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-49" Feb 13 16:05:27.283601 kubelet[2996]: E0213 16:05:27.283560 2996 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.49:6443/api/v1/nodes\": dial tcp 172.31.19.49:6443: connect: connection refused" node="ip-172-31-19-49" Feb 13 16:05:27.631458 kubelet[2996]: W0213 16:05:27.631250 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.631458 kubelet[2996]: E0213 16:05:27.631342 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.725286 kubelet[2996]: W0213 16:05:27.725163 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-49&limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.725286 kubelet[2996]: E0213 16:05:27.725252 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-49&limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.742525 kubelet[2996]: W0213 16:05:27.742403 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.742525 kubelet[2996]: E0213 16:05:27.742531 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.768005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623894729.mount: Deactivated successfully. 
Feb 13 16:05:27.779831 containerd[2047]: time="2025-02-13T16:05:27.779754198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:27.781506 containerd[2047]: time="2025-02-13T16:05:27.781453614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 16:05:27.784441 containerd[2047]: time="2025-02-13T16:05:27.783665838Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:27.787630 containerd[2047]: time="2025-02-13T16:05:27.787585746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:05:27.788354 containerd[2047]: time="2025-02-13T16:05:27.788315538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:27.791295 containerd[2047]: time="2025-02-13T16:05:27.791220582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:05:27.793168 containerd[2047]: time="2025-02-13T16:05:27.793120002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:27.799043 containerd[2047]: time="2025-02-13T16:05:27.798973410Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 766.360168ms" Feb 13 16:05:27.802944 containerd[2047]: time="2025-02-13T16:05:27.802837602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:27.805820 containerd[2047]: time="2025-02-13T16:05:27.805458882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 759.158272ms" Feb 13 16:05:27.809016 containerd[2047]: time="2025-02-13T16:05:27.808752930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 765.066868ms" Feb 13 16:05:27.977744 kubelet[2996]: E0213 16:05:27.976543 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-49?timeout=10s\": dial tcp 172.31.19.49:6443: connect: connection refused" interval="1.6s" Feb 13 
16:05:27.988637 kubelet[2996]: W0213 16:05:27.988464 2996 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:27.988637 kubelet[2996]: E0213 16:05:27.988587 2996 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:28.087916 kubelet[2996]: I0213 16:05:28.087753 2996 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-49" Feb 13 16:05:28.088484 kubelet[2996]: E0213 16:05:28.088439 2996 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.49:6443/api/v1/nodes\": dial tcp 172.31.19.49:6443: connect: connection refused" node="ip-172-31-19-49" Feb 13 16:05:28.129979 containerd[2047]: time="2025-02-13T16:05:28.128493160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:28.129979 containerd[2047]: time="2025-02-13T16:05:28.128606488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:28.129979 containerd[2047]: time="2025-02-13T16:05:28.128644744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.129979 containerd[2047]: time="2025-02-13T16:05:28.128865316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.132647 containerd[2047]: time="2025-02-13T16:05:28.131769508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:28.132647 containerd[2047]: time="2025-02-13T16:05:28.131895448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:28.132986 containerd[2047]: time="2025-02-13T16:05:28.132775900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.133119 containerd[2047]: time="2025-02-13T16:05:28.133055140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.136163 containerd[2047]: time="2025-02-13T16:05:28.133784860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:28.136163 containerd[2047]: time="2025-02-13T16:05:28.133883080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:28.136163 containerd[2047]: time="2025-02-13T16:05:28.134324548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.136163 containerd[2047]: time="2025-02-13T16:05:28.134582740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:28.274532 containerd[2047]: time="2025-02-13T16:05:28.274346513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-49,Uid:77df105a347320240770ecf38f7e8363,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce45e6c2fcb38e3c800a4ee1065816fe553c5aa12814c2373483fbae67121947\"" Feb 13 16:05:28.287950 containerd[2047]: time="2025-02-13T16:05:28.287626913Z" level=info msg="CreateContainer within sandbox \"ce45e6c2fcb38e3c800a4ee1065816fe553c5aa12814c2373483fbae67121947\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:05:28.315638 containerd[2047]: time="2025-02-13T16:05:28.315459233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-49,Uid:675033b572c084e0a07cbc98060ec4cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d328d109ef800ff12c08553a62a1e33d0eda51373805c843f4d3c72751acaa2d\"" Feb 13 16:05:28.323199 containerd[2047]: time="2025-02-13T16:05:28.323116793Z" level=info msg="CreateContainer within sandbox \"d328d109ef800ff12c08553a62a1e33d0eda51373805c843f4d3c72751acaa2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:05:28.334463 containerd[2047]: time="2025-02-13T16:05:28.334140845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-49,Uid:66df95165514af213252cbc855e2ec16,Namespace:kube-system,Attempt:0,} returns sandbox id \"606b6bd0a601a28bad35e1fdb78db28e839628ed201f54df1c004cd47783f45d\"" Feb 13 16:05:28.370432 containerd[2047]: time="2025-02-13T16:05:28.370286201Z" level=info msg="CreateContainer within sandbox \"606b6bd0a601a28bad35e1fdb78db28e839628ed201f54df1c004cd47783f45d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:05:28.372261 containerd[2047]: time="2025-02-13T16:05:28.372070493Z" level=info msg="CreateContainer within sandbox \"ce45e6c2fcb38e3c800a4ee1065816fe553c5aa12814c2373483fbae67121947\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517\"" Feb 13 16:05:28.375475 containerd[2047]: time="2025-02-13T16:05:28.373449809Z" level=info msg="StartContainer for \"5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517\"" Feb 13 16:05:28.391914 containerd[2047]: time="2025-02-13T16:05:28.391842005Z" level=info msg="CreateContainer within sandbox \"d328d109ef800ff12c08553a62a1e33d0eda51373805c843f4d3c72751acaa2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"897bdade7fb5b98c5a476594b15ef7e96d5f319c50f482afb0b005aa79e69b4c\"" Feb 13 16:05:28.393399 containerd[2047]: time="2025-02-13T16:05:28.393324209Z" level=info msg="StartContainer for \"897bdade7fb5b98c5a476594b15ef7e96d5f319c50f482afb0b005aa79e69b4c\"" Feb 13 16:05:28.414889 containerd[2047]: time="2025-02-13T16:05:28.414823313Z" level=info msg="CreateContainer within sandbox \"606b6bd0a601a28bad35e1fdb78db28e839628ed201f54df1c004cd47783f45d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458\"" Feb 13 16:05:28.416105 containerd[2047]: time="2025-02-13T16:05:28.415945493Z" level=info msg="StartContainer for \"584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458\"" Feb 13 16:05:28.519249 kubelet[2996]: E0213 16:05:28.519196 2996 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.49:6443: connect: connection refused Feb 13 16:05:28.612763 update_engine[2024]: I20250213 16:05:28.611773 2024 update_attempter.cc:509] Updating boot flags... Feb 13 16:05:28.643471 containerd[2047]: time="2025-02-13T16:05:28.638971002Z" level=info msg="StartContainer for \"897bdade7fb5b98c5a476594b15ef7e96d5f319c50f482afb0b005aa79e69b4c\" returns successfully" Feb 13 16:05:28.678590 containerd[2047]: time="2025-02-13T16:05:28.678482551Z" level=info msg="StartContainer for \"5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517\" returns successfully" Feb 13 16:05:28.809465 containerd[2047]: time="2025-02-13T16:05:28.807866119Z" level=info msg="StartContainer for \"584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458\" returns successfully" Feb 13 16:05:28.934450 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3275) Feb 13 16:05:29.700486 kubelet[2996]: I0213 16:05:29.700211 2996 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-49" Feb 13 16:05:29.738548 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3278) Feb 13 16:05:32.545708 kubelet[2996]: I0213 16:05:32.545664 2996 apiserver.go:52] "Watching apiserver" Feb 13 16:05:32.564470 kubelet[2996]: E0213 16:05:32.562222 2996 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-49\" not found" node="ip-172-31-19-49" Feb 13 16:05:32.572754 kubelet[2996]: I0213 16:05:32.572671 2996 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:05:32.606526 kubelet[2996]: I0213 16:05:32.605835 2996 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-49" Feb 13 16:05:32.814251 kubelet[2996]: E0213 16:05:32.814074 2996 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-49\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:35.820280 systemd[1]: Reloading requested from client PID 3451 ('systemctl') (unit session-7.scope)... Feb 13 16:05:35.820323 systemd[1]: Reloading... Feb 13 16:05:36.071522 zram_generator::config[3497]: No configuration found. Feb 13 16:05:36.384537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:36.585079 systemd[1]: Reloading finished in 762 ms. Feb 13 16:05:36.661597 kubelet[2996]: I0213 16:05:36.661374 2996 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:36.661819 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:36.687142 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:05:36.687859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:36.700628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:37.159879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
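[Editor's note] Once the apiserver container (897bdade…) starts, node registration succeeds at 16:05:32, but the kubelet briefly cannot create the mirror pod for the static kube-apiserver because the system-node-critical PriorityClass does not exist yet; the apiserver installs its built-in priority classes shortly after startup, so the error is transient. For reference, the built-in object the admission check above is waiting for — created automatically, values shown per upstream defaults rather than read from this cluster:

```yaml
# Built-in PriorityClass created by the kube-apiserver at startup
# (upstream default values; not taken from this log).
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: system-node-critical
value: 2000001000
globalDefault: false
description: Used for system critical pods that must not be moved from their current node.
```

The systemd reload and kubelet restart at 16:05:35–16:05:37 below is the normal end of the join flow: the kubelet comes back (PID 3561) with its final configuration, finds its client certificate already in place, and registers against the now-running apiserver.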
Feb 13 16:05:37.184263 (kubelet)[3561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:05:37.329117 kubelet[3561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:37.331177 kubelet[3561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:05:37.331177 kubelet[3561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:37.331177 kubelet[3561]: I0213 16:05:37.330014 3561 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:05:37.340253 kubelet[3561]: I0213 16:05:37.340198 3561 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:05:37.340560 kubelet[3561]: I0213 16:05:37.340532 3561 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:05:37.341157 kubelet[3561]: I0213 16:05:37.341119 3561 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:05:37.344507 kubelet[3561]: I0213 16:05:37.344467 3561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 16:05:37.350586 kubelet[3561]: I0213 16:05:37.350541 3561 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:37.384884 kubelet[3561]: I0213 16:05:37.384836 3561 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:05:37.386922 kubelet[3561]: I0213 16:05:37.386893 3561 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:05:37.387773 kubelet[3561]: I0213 16:05:37.387319 3561 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:05:37.387773 kubelet[3561]: I0213 16:05:37.387400 3561 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:05:37.387773 kubelet[3561]: I0213 16:05:37.387486 3561 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:05:37.387773 kubelet[3561]: I0213 16:05:37.387549 3561 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:37.387773 kubelet[3561]: I0213 16:05:37.387737 3561 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:05:37.387773 kubelet[3561]: I0213 16:05:37.387765 3561 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:05:37.388845 kubelet[3561]: I0213 16:05:37.387806 3561 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:05:37.388845 kubelet[3561]: I0213 16:05:37.387829 3561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:05:37.393470 kubelet[3561]: I0213 16:05:37.391519 3561 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:05:37.393470 kubelet[3561]: I0213 16:05:37.391872 3561 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:05:37.393470 kubelet[3561]: I0213 16:05:37.393452 3561 server.go:1256] "Started kubelet" Feb 13 16:05:37.401939 kubelet[3561]: I0213 16:05:37.400094 3561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:05:37.415744 kubelet[3561]: I0213 16:05:37.415566 3561 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:05:37.417151 kubelet[3561]: I0213 16:05:37.417096 3561 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:05:37.425445 kubelet[3561]: I0213 16:05:37.424969 3561 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Feb 13 16:05:37.425445 kubelet[3561]: I0213 16:05:37.425312 3561 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:05:37.431211 kubelet[3561]: I0213 16:05:37.430179 3561 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:05:37.431211 kubelet[3561]: I0213 16:05:37.430867 3561 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:05:37.431211 kubelet[3561]: I0213 16:05:37.431138 3561 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:05:37.495854 kubelet[3561]: E0213 16:05:37.495736 3561 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:05:37.500370 kubelet[3561]: I0213 16:05:37.498578 3561 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:05:37.500370 kubelet[3561]: I0213 16:05:37.498606 3561 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:05:37.500370 kubelet[3561]: I0213 16:05:37.498740 3561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:05:37.502228 kubelet[3561]: I0213 16:05:37.502151 3561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:05:37.507117 kubelet[3561]: I0213 16:05:37.507038 3561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:05:37.507577 kubelet[3561]: I0213 16:05:37.507295 3561 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:05:37.507577 kubelet[3561]: I0213 16:05:37.507337 3561 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:05:37.507577 kubelet[3561]: E0213 16:05:37.507441 3561 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:05:37.564337 kubelet[3561]: I0213 16:05:37.563093 3561 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-49" Feb 13 16:05:37.594762 kubelet[3561]: I0213 16:05:37.594478 3561 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-49" Feb 13 16:05:37.594762 kubelet[3561]: I0213 16:05:37.594618 3561 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-49" Feb 13 16:05:37.607538 kubelet[3561]: E0213 16:05:37.607488 3561 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:05:37.702198 kubelet[3561]: I0213 16:05:37.700505 3561 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:05:37.702198 kubelet[3561]: I0213 16:05:37.700583 3561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:05:37.702198 kubelet[3561]: I0213 16:05:37.700642 3561 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:37.702198 kubelet[3561]: I0213 16:05:37.701180 3561 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:05:37.702198 kubelet[3561]: I0213 16:05:37.701266 3561 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:05:37.702198 kubelet[3561]: I0213 16:05:37.701286 3561 policy_none.go:49] "None policy: Start" Feb 13 16:05:37.707341 kubelet[3561]: I0213 16:05:37.707253 3561 memory_manager.go:170] "Starting 
memorymanager" policy="None" Feb 13 16:05:37.707341 kubelet[3561]: I0213 16:05:37.707367 3561 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:05:37.709230 kubelet[3561]: I0213 16:05:37.709118 3561 state_mem.go:75] "Updated machine memory state" Feb 13 16:05:37.719061 kubelet[3561]: I0213 16:05:37.718871 3561 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:05:37.722904 kubelet[3561]: I0213 16:05:37.722557 3561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:05:37.808885 kubelet[3561]: I0213 16:05:37.808666 3561 topology_manager.go:215] "Topology Admit Handler" podUID="66df95165514af213252cbc855e2ec16" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-49" Feb 13 16:05:37.808885 kubelet[3561]: I0213 16:05:37.808827 3561 topology_manager.go:215] "Topology Admit Handler" podUID="675033b572c084e0a07cbc98060ec4cf" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-49" Feb 13 16:05:37.811946 kubelet[3561]: I0213 16:05:37.811859 3561 topology_manager.go:215] "Topology Admit Handler" podUID="77df105a347320240770ecf38f7e8363" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:37.830435 kubelet[3561]: E0213 16:05:37.830348 3561 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-49\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:37.831935 kubelet[3561]: I0213 16:05:37.831852 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/675033b572c084e0a07cbc98060ec4cf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-49\" (UID: \"675033b572c084e0a07cbc98060ec4cf\") " pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:37.832081 kubelet[3561]: I0213 16:05:37.831946 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:37.832081 kubelet[3561]: I0213 16:05:37.831997 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:37.832207 kubelet[3561]: I0213 16:05:37.832096 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:37.832207 kubelet[3561]: I0213 16:05:37.832147 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66df95165514af213252cbc855e2ec16-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-49\" (UID: \"66df95165514af213252cbc855e2ec16\") " 
pod="kube-system/kube-scheduler-ip-172-31-19-49" Feb 13 16:05:37.832207 kubelet[3561]: I0213 16:05:37.832191 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/675033b572c084e0a07cbc98060ec4cf-ca-certs\") pod \"kube-apiserver-ip-172-31-19-49\" (UID: \"675033b572c084e0a07cbc98060ec4cf\") " pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:37.832364 kubelet[3561]: I0213 16:05:37.832234 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/675033b572c084e0a07cbc98060ec4cf-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-49\" (UID: \"675033b572c084e0a07cbc98060ec4cf\") " pod="kube-system/kube-apiserver-ip-172-31-19-49" Feb 13 16:05:37.832364 kubelet[3561]: I0213 16:05:37.832277 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:37.832364 kubelet[3561]: I0213 16:05:37.832324 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/77df105a347320240770ecf38f7e8363-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-49\" (UID: \"77df105a347320240770ecf38f7e8363\") " pod="kube-system/kube-controller-manager-ip-172-31-19-49" Feb 13 16:05:38.390665 kubelet[3561]: I0213 16:05:38.390260 3561 apiserver.go:52] "Watching apiserver" Feb 13 16:05:38.431593 kubelet[3561]: I0213 16:05:38.431527 3561 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:05:38.630617 kubelet[3561]: E0213 16:05:38.630552 3561 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-49\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-49" Feb 13 16:05:38.660236 kubelet[3561]: I0213 16:05:38.659876 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-49" podStartSLOduration=1.65981254 podStartE2EDuration="1.65981254s" podCreationTimestamp="2025-02-13 16:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:38.64331218 +0000 UTC m=+1.444924364" watchObservedRunningTime="2025-02-13 16:05:38.65981254 +0000 UTC m=+1.461424712" Feb 13 16:05:38.681647 kubelet[3561]: I0213 16:05:38.681542 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-49" podStartSLOduration=1.68148352 podStartE2EDuration="1.68148352s" podCreationTimestamp="2025-02-13 16:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:38.660040216 +0000 UTC m=+1.461652400" watchObservedRunningTime="2025-02-13 16:05:38.68148352 +0000 UTC m=+1.483095716" Feb 13 16:05:38.712785 kubelet[3561]: I0213 16:05:38.711119 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-49" podStartSLOduration=2.711056368 podStartE2EDuration="2.711056368s" 
podCreationTimestamp="2025-02-13 16:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:38.681972148 +0000 UTC m=+1.483584320" watchObservedRunningTime="2025-02-13 16:05:38.711056368 +0000 UTC m=+1.512668540"
Feb 13 16:05:42.660332 sudo[2401]: pam_unix(sudo:session): session closed for user root
Feb 13 16:05:42.684301 sshd[2397]: pam_unix(sshd:session): session closed for user core
Feb 13 16:05:42.690970 systemd[1]: sshd@6-172.31.19.49:22-139.178.68.195:57328.service: Deactivated successfully.
Feb 13 16:05:42.701105 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 16:05:42.702528 systemd-logind[2020]: Session 7 logged out. Waiting for processes to exit.
Feb 13 16:05:42.707042 systemd-logind[2020]: Removed session 7.
Feb 13 16:05:48.252896 kubelet[3561]: I0213 16:05:48.252772 3561 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 16:05:48.254069 containerd[2047]: time="2025-02-13T16:05:48.253968168Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 16:05:48.255387 kubelet[3561]: I0213 16:05:48.255346 3561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 16:05:48.905821 kubelet[3561]: I0213 16:05:48.903181 3561 topology_manager.go:215] "Topology Admit Handler" podUID="71604fd8-bfac-4e44-a771-4d16cdffd262" podNamespace="kube-system" podName="kube-proxy-nc9h9"
Feb 13 16:05:49.012487 kubelet[3561]: I0213 16:05:49.012351 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71604fd8-bfac-4e44-a771-4d16cdffd262-xtables-lock\") pod \"kube-proxy-nc9h9\" (UID: \"71604fd8-bfac-4e44-a771-4d16cdffd262\") " pod="kube-system/kube-proxy-nc9h9"
Feb 13 16:05:49.012487 kubelet[3561]: I0213 16:05:49.012495 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84988\" (UniqueName: \"kubernetes.io/projected/71604fd8-bfac-4e44-a771-4d16cdffd262-kube-api-access-84988\") pod \"kube-proxy-nc9h9\" (UID: \"71604fd8-bfac-4e44-a771-4d16cdffd262\") " pod="kube-system/kube-proxy-nc9h9"
Feb 13 16:05:49.016545 kubelet[3561]: I0213 16:05:49.012560 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71604fd8-bfac-4e44-a771-4d16cdffd262-lib-modules\") pod \"kube-proxy-nc9h9\" (UID: \"71604fd8-bfac-4e44-a771-4d16cdffd262\") " pod="kube-system/kube-proxy-nc9h9"
Feb 13 16:05:49.016545 kubelet[3561]: I0213 16:05:49.012626 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71604fd8-bfac-4e44-a771-4d16cdffd262-kube-proxy\") pod \"kube-proxy-nc9h9\" (UID: \"71604fd8-bfac-4e44-a771-4d16cdffd262\") " pod="kube-system/kube-proxy-nc9h9"
Feb 13 16:05:49.232793 containerd[2047]: time="2025-02-13T16:05:49.232643773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nc9h9,Uid:71604fd8-bfac-4e44-a771-4d16cdffd262,Namespace:kube-system,Attempt:0,}"
Feb 13 16:05:49.333643 containerd[2047]: time="2025-02-13T16:05:49.333479281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:05:49.337436 containerd[2047]: time="2025-02-13T16:05:49.333708433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:05:49.337436 containerd[2047]: time="2025-02-13T16:05:49.333780541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:05:49.337436 containerd[2047]: time="2025-02-13T16:05:49.336824521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:05:49.439467 kubelet[3561]: I0213 16:05:49.439352 3561 topology_manager.go:215] "Topology Admit Handler" podUID="891ed05e-d96b-4731-9021-afda7b6ac0a9" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-wq8g7"
Feb 13 16:05:49.491396 containerd[2047]: time="2025-02-13T16:05:49.491145194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nc9h9,Uid:71604fd8-bfac-4e44-a771-4d16cdffd262,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70f56c305ca51a18368e6105707acfea730afdbdab4a250006429fae22be7cb\""
Feb 13 16:05:49.500636 containerd[2047]: time="2025-02-13T16:05:49.500540810Z" level=info msg="CreateContainer within sandbox \"f70f56c305ca51a18368e6105707acfea730afdbdab4a250006429fae22be7cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 16:05:49.514329 kubelet[3561]: I0213 16:05:49.514194 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/891ed05e-d96b-4731-9021-afda7b6ac0a9-var-lib-calico\") pod \"tigera-operator-c7ccbd65-wq8g7\" (UID: \"891ed05e-d96b-4731-9021-afda7b6ac0a9\") " pod="tigera-operator/tigera-operator-c7ccbd65-wq8g7"
Feb 13 16:05:49.514329 kubelet[3561]: I0213 16:05:49.514301 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wzgz\" (UniqueName: \"kubernetes.io/projected/891ed05e-d96b-4731-9021-afda7b6ac0a9-kube-api-access-2wzgz\") pod \"tigera-operator-c7ccbd65-wq8g7\" (UID: \"891ed05e-d96b-4731-9021-afda7b6ac0a9\") " pod="tigera-operator/tigera-operator-c7ccbd65-wq8g7"
Feb 13 16:05:49.562978 containerd[2047]: time="2025-02-13T16:05:49.561282458Z" level=info msg="CreateContainer within sandbox \"f70f56c305ca51a18368e6105707acfea730afdbdab4a250006429fae22be7cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e659bd152e3b67ed97c6326db583691363c383464f49939131b30d27264b424\""
Feb 13 16:05:49.565190 containerd[2047]: time="2025-02-13T16:05:49.563504858Z" level=info msg="StartContainer for \"7e659bd152e3b67ed97c6326db583691363c383464f49939131b30d27264b424\""
Feb 13 16:05:49.700381 containerd[2047]: time="2025-02-13T16:05:49.700285467Z" level=info msg="StartContainer for \"7e659bd152e3b67ed97c6326db583691363c383464f49939131b30d27264b424\" returns successfully"
Feb 13 16:05:49.754276 containerd[2047]: time="2025-02-13T16:05:49.754182483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-wq8g7,Uid:891ed05e-d96b-4731-9021-afda7b6ac0a9,Namespace:tigera-operator,Attempt:0,}"
Feb 13 16:05:49.810100 containerd[2047]: time="2025-02-13T16:05:49.809312056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:05:49.810100 containerd[2047]: time="2025-02-13T16:05:49.809547676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:05:49.810100 containerd[2047]: time="2025-02-13T16:05:49.809614024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:05:49.810807 containerd[2047]: time="2025-02-13T16:05:49.810065596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:05:49.933702 containerd[2047]: time="2025-02-13T16:05:49.933639568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-wq8g7,Uid:891ed05e-d96b-4731-9021-afda7b6ac0a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"988100da5dd3181f854d766f832d73a3404b443ebfb7aec4305e5735a5c8c170\""
Feb 13 16:05:49.938964 containerd[2047]: time="2025-02-13T16:05:49.938896084Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 16:05:50.231132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111036266.mount: Deactivated successfully.
Feb 13 16:05:52.667998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1100842213.mount: Deactivated successfully.
Feb 13 16:05:53.415081 containerd[2047]: time="2025-02-13T16:05:53.415001897Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:05:53.416443 containerd[2047]: time="2025-02-13T16:05:53.416332217Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 16:05:53.420141 containerd[2047]: time="2025-02-13T16:05:53.419991065Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:05:53.427101 containerd[2047]: time="2025-02-13T16:05:53.426973193Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:05:53.429120 containerd[2047]: time="2025-02-13T16:05:53.429040781Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.490072745s"
Feb 13 16:05:53.429120 containerd[2047]: time="2025-02-13T16:05:53.429113777Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 16:05:53.435767 containerd[2047]: time="2025-02-13T16:05:53.435673014Z" level=info msg="CreateContainer within sandbox \"988100da5dd3181f854d766f832d73a3404b443ebfb7aec4305e5735a5c8c170\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 16:05:53.461758 containerd[2047]: time="2025-02-13T16:05:53.461317338Z" level=info msg="CreateContainer within sandbox \"988100da5dd3181f854d766f832d73a3404b443ebfb7aec4305e5735a5c8c170\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f\""
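The containerd records above trace the full CRI pod lifecycle for kube-proxy-nc9h9: RunPodSandbox returns a 64-hex sandbox id, CreateContainer runs against that sandbox and returns a container id, and StartContainer reports success, while tigera-operator is still mid-flight waiting on its image pull. A minimal sketch of recovering that sequence from a saved copy of this journal (the node.log filename is an assumption, and the patterns simply mirror the record shapes shown above):

```python
import re

# Assumed: the journal excerpt above, saved verbatim to node.log.
# Output is grouped by event type, not chronologically.
PATTERNS = {
    "sandbox-start": r'RunPodSandbox for &PodSandboxMetadata\{Name:([\w.-]+),',
    "sandbox-ready": r'returns sandbox id \\?"([0-9a-f]{64})\\?"',
    "ctr-created":   r'returns container id \\?"([0-9a-f]{64})\\?"',
    "ctr-running":   r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully',
}

with open("node.log") as f:
    text = f.read()

for event, pattern in PATTERNS.items():
    for m in re.finditer(pattern, text):
        print(f"{event:14} {m.group(1)[:24]}")
```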
\"4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f\"" Feb 13 16:05:53.462932 containerd[2047]: time="2025-02-13T16:05:53.462820710Z" level=info msg="StartContainer for \"4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f\"" Feb 13 16:05:53.619025 containerd[2047]: time="2025-02-13T16:05:53.618876894Z" level=info msg="StartContainer for \"4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f\" returns successfully" Feb 13 16:05:53.713937 kubelet[3561]: I0213 16:05:53.710962 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nc9h9" podStartSLOduration=5.710889379 podStartE2EDuration="5.710889379s" podCreationTimestamp="2025-02-13 16:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:50.69365884 +0000 UTC m=+13.495271024" watchObservedRunningTime="2025-02-13 16:05:53.710889379 +0000 UTC m=+16.512501551" Feb 13 16:05:59.874484 kubelet[3561]: I0213 16:05:59.871518 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-wq8g7" podStartSLOduration=7.37799054 podStartE2EDuration="10.871405945s" podCreationTimestamp="2025-02-13 16:05:49 +0000 UTC" firstStartedPulling="2025-02-13 16:05:49.936139264 +0000 UTC m=+12.737751436" lastFinishedPulling="2025-02-13 16:05:53.429554657 +0000 UTC m=+16.231166841" observedRunningTime="2025-02-13 16:05:53.712023595 +0000 UTC m=+16.513635779" watchObservedRunningTime="2025-02-13 16:05:59.871405945 +0000 UTC m=+22.673018117" Feb 13 16:05:59.874484 kubelet[3561]: I0213 16:05:59.871988 3561 topology_manager.go:215] "Topology Admit Handler" podUID="265b878b-91ee-47ec-b9f7-4626790fe9e8" podNamespace="calico-system" podName="calico-typha-8487945587-9k5f7" Feb 13 16:05:59.895948 kubelet[3561]: I0213 16:05:59.895860 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265b878b-91ee-47ec-b9f7-4626790fe9e8-tigera-ca-bundle\") pod \"calico-typha-8487945587-9k5f7\" (UID: \"265b878b-91ee-47ec-b9f7-4626790fe9e8\") " pod="calico-system/calico-typha-8487945587-9k5f7" Feb 13 16:05:59.896721 kubelet[3561]: I0213 16:05:59.896295 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/265b878b-91ee-47ec-b9f7-4626790fe9e8-typha-certs\") pod \"calico-typha-8487945587-9k5f7\" (UID: \"265b878b-91ee-47ec-b9f7-4626790fe9e8\") " pod="calico-system/calico-typha-8487945587-9k5f7" Feb 13 16:05:59.896721 kubelet[3561]: I0213 16:05:59.896523 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlf2\" (UniqueName: \"kubernetes.io/projected/265b878b-91ee-47ec-b9f7-4626790fe9e8-kube-api-access-8wlf2\") pod \"calico-typha-8487945587-9k5f7\" (UID: \"265b878b-91ee-47ec-b9f7-4626790fe9e8\") " pod="calico-system/calico-typha-8487945587-9k5f7" Feb 13 16:06:00.154834 kubelet[3561]: I0213 16:06:00.153697 3561 topology_manager.go:215] "Topology Admit Handler" podUID="48cad1cd-183b-475c-b2c5-0d5131dfaa36" podNamespace="calico-system" podName="calico-node-d6lrl" Feb 13 16:06:00.200082 kubelet[3561]: I0213 16:06:00.199202 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-flexvol-driver-host\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200082 kubelet[3561]: I0213 16:06:00.199299 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-lib-modules\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200082 kubelet[3561]: I0213 16:06:00.199352 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-policysync\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200082 kubelet[3561]: I0213 16:06:00.199396 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-var-lib-calico\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200082 kubelet[3561]: I0213 16:06:00.199766 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-var-run-calico\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200719 kubelet[3561]: I0213 16:06:00.199946 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-cni-log-dir\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200719 kubelet[3561]: I0213 16:06:00.200073 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vb5w\" (UniqueName: \"kubernetes.io/projected/48cad1cd-183b-475c-b2c5-0d5131dfaa36-kube-api-access-5vb5w\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200719 kubelet[3561]: I0213 16:06:00.201013 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-xtables-lock\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200719 kubelet[3561]: I0213 16:06:00.201136 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-cni-bin-dir\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.200719 kubelet[3561]: I0213 16:06:00.201189 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/48cad1cd-183b-475c-b2c5-0d5131dfaa36-cni-net-dir\") pod 
\"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.202020 kubelet[3561]: I0213 16:06:00.201262 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/48cad1cd-183b-475c-b2c5-0d5131dfaa36-node-certs\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.202020 kubelet[3561]: I0213 16:06:00.201337 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48cad1cd-183b-475c-b2c5-0d5131dfaa36-tigera-ca-bundle\") pod \"calico-node-d6lrl\" (UID: \"48cad1cd-183b-475c-b2c5-0d5131dfaa36\") " pod="calico-system/calico-node-d6lrl" Feb 13 16:06:00.228481 containerd[2047]: time="2025-02-13T16:06:00.228356939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8487945587-9k5f7,Uid:265b878b-91ee-47ec-b9f7-4626790fe9e8,Namespace:calico-system,Attempt:0,}" Feb 13 16:06:00.286110 containerd[2047]: time="2025-02-13T16:06:00.282121560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:00.286110 containerd[2047]: time="2025-02-13T16:06:00.285034332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:00.286110 containerd[2047]: time="2025-02-13T16:06:00.285787800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:00.288456 containerd[2047]: time="2025-02-13T16:06:00.288063300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:00.314109 kubelet[3561]: E0213 16:06:00.313837 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.314109 kubelet[3561]: W0213 16:06:00.313868 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.314109 kubelet[3561]: E0213 16:06:00.313920 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.322589 kubelet[3561]: E0213 16:06:00.322247 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.322589 kubelet[3561]: W0213 16:06:00.322299 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.323571 kubelet[3561]: E0213 16:06:00.323512 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.335640 kubelet[3561]: E0213 16:06:00.335582 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.335809 kubelet[3561]: W0213 16:06:00.335645 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.345463 kubelet[3561]: E0213 16:06:00.340778 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.351478 kubelet[3561]: E0213 16:06:00.349818 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.351478 kubelet[3561]: W0213 16:06:00.349904 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.351478 kubelet[3561]: E0213 16:06:00.351052 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.365184 kubelet[3561]: E0213 16:06:00.365126 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.365184 kubelet[3561]: W0213 16:06:00.365187 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.365399 kubelet[3561]: E0213 16:06:00.365227 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.368654 kubelet[3561]: E0213 16:06:00.366565 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.368654 kubelet[3561]: W0213 16:06:00.366672 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.368654 kubelet[3561]: E0213 16:06:00.366749 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.368654 kubelet[3561]: E0213 16:06:00.367610 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.368654 kubelet[3561]: W0213 16:06:00.368099 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.368654 kubelet[3561]: E0213 16:06:00.368135 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.369172 kubelet[3561]: E0213 16:06:00.368897 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.369172 kubelet[3561]: W0213 16:06:00.368924 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.369172 kubelet[3561]: E0213 16:06:00.368995 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.379938 kubelet[3561]: I0213 16:06:00.379620 3561 topology_manager.go:215] "Topology Admit Handler" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" podNamespace="calico-system" podName="csi-node-driver-hd4qw" Feb 13 16:06:00.386766 kubelet[3561]: E0213 16:06:00.386692 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:00.388483 kubelet[3561]: E0213 16:06:00.388047 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.389152 kubelet[3561]: W0213 16:06:00.388889 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.389152 kubelet[3561]: E0213 16:06:00.388958 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.390791 kubelet[3561]: E0213 16:06:00.390558 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.391511 kubelet[3561]: W0213 16:06:00.391083 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.391511 kubelet[3561]: E0213 16:06:00.391136 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.394039 kubelet[3561]: E0213 16:06:00.393762 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.394039 kubelet[3561]: W0213 16:06:00.393806 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.394039 kubelet[3561]: E0213 16:06:00.393846 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.395313 kubelet[3561]: E0213 16:06:00.395076 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.395313 kubelet[3561]: W0213 16:06:00.395106 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.395313 kubelet[3561]: E0213 16:06:00.395141 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.396239 kubelet[3561]: E0213 16:06:00.396019 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.396239 kubelet[3561]: W0213 16:06:00.396051 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.396239 kubelet[3561]: E0213 16:06:00.396089 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.397104 kubelet[3561]: E0213 16:06:00.396883 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.397104 kubelet[3561]: W0213 16:06:00.396914 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.397104 kubelet[3561]: E0213 16:06:00.396947 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.398538 kubelet[3561]: E0213 16:06:00.398270 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.398538 kubelet[3561]: W0213 16:06:00.398303 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.398538 kubelet[3561]: E0213 16:06:00.398340 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.399006 kubelet[3561]: E0213 16:06:00.398979 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.399278 kubelet[3561]: W0213 16:06:00.399076 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.399278 kubelet[3561]: E0213 16:06:00.399113 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.400595 kubelet[3561]: E0213 16:06:00.400537 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.401406 kubelet[3561]: W0213 16:06:00.401342 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.401708 kubelet[3561]: E0213 16:06:00.401659 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.402541 kubelet[3561]: E0213 16:06:00.402406 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.403317 kubelet[3561]: W0213 16:06:00.403156 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.403704 kubelet[3561]: E0213 16:06:00.403555 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.405709 kubelet[3561]: E0213 16:06:00.405558 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.405709 kubelet[3561]: W0213 16:06:00.405601 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.405709 kubelet[3561]: E0213 16:06:00.405642 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.412059 kubelet[3561]: E0213 16:06:00.411984 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.412059 kubelet[3561]: W0213 16:06:00.412036 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.412278 kubelet[3561]: E0213 16:06:00.412075 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.421057 kubelet[3561]: E0213 16:06:00.420965 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.421057 kubelet[3561]: W0213 16:06:00.421037 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.421335 kubelet[3561]: E0213 16:06:00.421093 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.427485 kubelet[3561]: E0213 16:06:00.426349 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.427691 kubelet[3561]: W0213 16:06:00.427553 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.431867 kubelet[3561]: E0213 16:06:00.427962 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.434541 kubelet[3561]: E0213 16:06:00.434180 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.434541 kubelet[3561]: W0213 16:06:00.434221 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.434541 kubelet[3561]: E0213 16:06:00.434258 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.439477 kubelet[3561]: E0213 16:06:00.437279 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.439477 kubelet[3561]: W0213 16:06:00.437368 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.439477 kubelet[3561]: E0213 16:06:00.437459 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.442509 kubelet[3561]: E0213 16:06:00.440955 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.442509 kubelet[3561]: W0213 16:06:00.441057 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.442509 kubelet[3561]: E0213 16:06:00.441147 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.445994 kubelet[3561]: E0213 16:06:00.445574 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.445994 kubelet[3561]: W0213 16:06:00.445626 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.445994 kubelet[3561]: E0213 16:06:00.445675 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.446816 kubelet[3561]: E0213 16:06:00.446741 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.446816 kubelet[3561]: W0213 16:06:00.446798 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.447233 kubelet[3561]: E0213 16:06:00.446839 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.450371 kubelet[3561]: E0213 16:06:00.447486 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.450371 kubelet[3561]: W0213 16:06:00.447630 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.450371 kubelet[3561]: E0213 16:06:00.447699 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.453895 kubelet[3561]: E0213 16:06:00.453590 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.453895 kubelet[3561]: W0213 16:06:00.453671 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.453895 kubelet[3561]: E0213 16:06:00.453725 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.453895 kubelet[3561]: I0213 16:06:00.453791 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef-socket-dir\") pod \"csi-node-driver-hd4qw\" (UID: \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\") " pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:00.456533 kubelet[3561]: E0213 16:06:00.456241 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.456533 kubelet[3561]: W0213 16:06:00.456326 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.456533 kubelet[3561]: E0213 16:06:00.456406 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.456986 kubelet[3561]: I0213 16:06:00.456697 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef-varrun\") pod \"csi-node-driver-hd4qw\" (UID: \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\") " pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:00.462836 kubelet[3561]: E0213 16:06:00.457231 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.462836 kubelet[3561]: W0213 16:06:00.457322 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.462836 kubelet[3561]: E0213 16:06:00.457395 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.462836 kubelet[3561]: I0213 16:06:00.457657 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5chb\" (UniqueName: \"kubernetes.io/projected/a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef-kube-api-access-b5chb\") pod \"csi-node-driver-hd4qw\" (UID: \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\") " pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:00.462836 kubelet[3561]: E0213 16:06:00.458759 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.462836 kubelet[3561]: W0213 16:06:00.458830 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.462836 kubelet[3561]: E0213 16:06:00.458890 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.462836 kubelet[3561]: I0213 16:06:00.458970 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef-kubelet-dir\") pod \"csi-node-driver-hd4qw\" (UID: \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\") " pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:00.462836 kubelet[3561]: E0213 16:06:00.459590 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.465733 kubelet[3561]: W0213 16:06:00.459648 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.465733 kubelet[3561]: E0213 16:06:00.459708 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.465733 kubelet[3561]: I0213 16:06:00.459805 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef-registration-dir\") pod \"csi-node-driver-hd4qw\" (UID: \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\") " pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.470865 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.475458 kubelet[3561]: W0213 16:06:00.470912 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.470978 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.471744 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.475458 kubelet[3561]: W0213 16:06:00.471766 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.471827 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.473038 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.475458 kubelet[3561]: W0213 16:06:00.473284 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.473369 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.475458 kubelet[3561]: E0213 16:06:00.474833 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.476960 kubelet[3561]: W0213 16:06:00.474865 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.476960 kubelet[3561]: E0213 16:06:00.475024 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.476960 kubelet[3561]: E0213 16:06:00.476755 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.476960 kubelet[3561]: W0213 16:06:00.476791 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.476960 kubelet[3561]: E0213 16:06:00.476845 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.479270 kubelet[3561]: E0213 16:06:00.477742 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.479270 kubelet[3561]: W0213 16:06:00.477781 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.479270 kubelet[3561]: E0213 16:06:00.477831 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.479270 kubelet[3561]: E0213 16:06:00.479086 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.479270 kubelet[3561]: W0213 16:06:00.479133 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.479270 kubelet[3561]: E0213 16:06:00.479171 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.481187 kubelet[3561]: E0213 16:06:00.480296 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.481187 kubelet[3561]: W0213 16:06:00.480349 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.481187 kubelet[3561]: E0213 16:06:00.480386 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.488454 kubelet[3561]: E0213 16:06:00.486033 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.488454 kubelet[3561]: W0213 16:06:00.486283 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.488454 kubelet[3561]: E0213 16:06:00.486328 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.488454 kubelet[3561]: E0213 16:06:00.487577 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.488454 kubelet[3561]: W0213 16:06:00.487607 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.488454 kubelet[3561]: E0213 16:06:00.487965 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.492987 containerd[2047]: time="2025-02-13T16:06:00.492740509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d6lrl,Uid:48cad1cd-183b-475c-b2c5-0d5131dfaa36,Namespace:calico-system,Attempt:0,}" Feb 13 16:06:00.562995 kubelet[3561]: E0213 16:06:00.562825 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.563127 kubelet[3561]: W0213 16:06:00.563081 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.565299 kubelet[3561]: E0213 16:06:00.563124 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.566089 kubelet[3561]: E0213 16:06:00.566019 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.566089 kubelet[3561]: W0213 16:06:00.566057 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.566305 kubelet[3561]: E0213 16:06:00.566099 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.568819 kubelet[3561]: E0213 16:06:00.568064 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.568819 kubelet[3561]: W0213 16:06:00.568105 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.568819 kubelet[3561]: E0213 16:06:00.568143 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.573354 kubelet[3561]: E0213 16:06:00.572540 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.573354 kubelet[3561]: W0213 16:06:00.572582 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.573354 kubelet[3561]: E0213 16:06:00.572659 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.578252 kubelet[3561]: E0213 16:06:00.577866 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.578252 kubelet[3561]: W0213 16:06:00.577909 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.578252 kubelet[3561]: E0213 16:06:00.577983 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.580472 kubelet[3561]: E0213 16:06:00.579999 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.580472 kubelet[3561]: W0213 16:06:00.580113 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.584085 kubelet[3561]: E0213 16:06:00.583408 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.590819 kubelet[3561]: E0213 16:06:00.589396 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.590819 kubelet[3561]: W0213 16:06:00.589861 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.590819 kubelet[3561]: E0213 16:06:00.589952 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.592530 kubelet[3561]: E0213 16:06:00.592472 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.592530 kubelet[3561]: W0213 16:06:00.592511 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.595463 kubelet[3561]: E0213 16:06:00.594533 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.598117 kubelet[3561]: E0213 16:06:00.598065 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.598117 kubelet[3561]: W0213 16:06:00.598105 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.599731 kubelet[3561]: E0213 16:06:00.599653 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.610809 kubelet[3561]: E0213 16:06:00.609958 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.611616 kubelet[3561]: W0213 16:06:00.610726 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.611816 kubelet[3561]: E0213 16:06:00.611612 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.617512 kubelet[3561]: E0213 16:06:00.617453 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.617512 kubelet[3561]: W0213 16:06:00.617499 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.618341 kubelet[3561]: E0213 16:06:00.617911 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.620945 kubelet[3561]: E0213 16:06:00.620515 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.620945 kubelet[3561]: W0213 16:06:00.620589 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.621766 kubelet[3561]: E0213 16:06:00.621134 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.623830 kubelet[3561]: E0213 16:06:00.623772 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.623830 kubelet[3561]: W0213 16:06:00.623820 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.624375 kubelet[3561]: E0213 16:06:00.623958 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.630373 kubelet[3561]: E0213 16:06:00.629990 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.630373 kubelet[3561]: W0213 16:06:00.630033 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.631762 kubelet[3561]: E0213 16:06:00.630799 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.632513 kubelet[3561]: E0213 16:06:00.632194 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.632513 kubelet[3561]: W0213 16:06:00.632237 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.634639 kubelet[3561]: E0213 16:06:00.634584 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.642022 kubelet[3561]: E0213 16:06:00.641669 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.642022 kubelet[3561]: W0213 16:06:00.641742 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.642022 kubelet[3561]: E0213 16:06:00.641894 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.643663 kubelet[3561]: E0213 16:06:00.643372 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.643663 kubelet[3561]: W0213 16:06:00.643499 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.644373 kubelet[3561]: E0213 16:06:00.644232 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.646191 kubelet[3561]: E0213 16:06:00.645663 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.646191 kubelet[3561]: W0213 16:06:00.645731 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.646191 kubelet[3561]: E0213 16:06:00.645906 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:00.646881 kubelet[3561]: E0213 16:06:00.646817 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.646881 kubelet[3561]: W0213 16:06:00.646852 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.647290 kubelet[3561]: E0213 16:06:00.647057 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.649333 kubelet[3561]: E0213 16:06:00.649254 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.649333 kubelet[3561]: W0213 16:06:00.649309 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.650957 containerd[2047]: time="2025-02-13T16:06:00.646048309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:00.651108 kubelet[3561]: E0213 16:06:00.650314 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.651778 kubelet[3561]: E0213 16:06:00.651706 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.651778 kubelet[3561]: W0213 16:06:00.651764 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.653160 kubelet[3561]: E0213 16:06:00.652671 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.653160 kubelet[3561]: W0213 16:06:00.652707 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.653160 kubelet[3561]: E0213 16:06:00.653154 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.653669 kubelet[3561]: E0213 16:06:00.653281 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:00.655028 containerd[2047]: time="2025-02-13T16:06:00.651599365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:00.655028 containerd[2047]: time="2025-02-13T16:06:00.651679249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:00.655028 containerd[2047]: time="2025-02-13T16:06:00.654550345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:00.656665 kubelet[3561]: E0213 16:06:00.654800 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:00.656665 kubelet[3561]: W0213 16:06:00.654931 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:00.658714 kubelet[3561]: E0213 16:06:00.656628 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 16:06:00.701132 containerd[2047]: time="2025-02-13T16:06:00.701002646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8487945587-9k5f7,Uid:265b878b-91ee-47ec-b9f7-4626790fe9e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"43a1cd44cac1a4c00045369fe8487bb722752face7d9ca32e2ab999522e8391e\"" Feb 13 16:06:00.711117 containerd[2047]: time="2025-02-13T16:06:00.710140142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 16:06:00.829534 containerd[2047]: time="2025-02-13T16:06:00.828961670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d6lrl,Uid:48cad1cd-183b-475c-b2c5-0d5131dfaa36,Namespace:calico-system,Attempt:0,} returns sandbox id \"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\"" Feb 13 16:06:02.430868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724973547.mount: Deactivated successfully. Feb 13 16:06:02.509553 kubelet[3561]: E0213 16:06:02.508837 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:03.507327 containerd[2047]: time="2025-02-13T16:06:03.507207172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:03.509941 containerd[2047]: time="2025-02-13T16:06:03.509403016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 16:06:03.511643 containerd[2047]: time="2025-02-13T16:06:03.511578700Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:03.516574 containerd[2047]: time="2025-02-13T16:06:03.516411808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:03.518460 containerd[2047]: time="2025-02-13T16:06:03.518004832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.807769326s" Feb 13 16:06:03.518460 containerd[2047]: time="2025-02-13T16:06:03.518065420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 16:06:03.520390 containerd[2047]: time="2025-02-13T16:06:03.520332868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 16:06:03.553340 containerd[2047]: time="2025-02-13T16:06:03.553113952Z" level=info msg="CreateContainer within sandbox \"43a1cd44cac1a4c00045369fe8487bb722752face7d9ca32e2ab999522e8391e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 16:06:03.609612 containerd[2047]: time="2025-02-13T16:06:03.609527356Z" level=info msg="CreateContainer within sandbox 
\"43a1cd44cac1a4c00045369fe8487bb722752face7d9ca32e2ab999522e8391e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"58e2667b95f246b48a214e275c39232d7ab750da3ceb3eab51ff1a5c184a817e\"" Feb 13 16:06:03.611768 containerd[2047]: time="2025-02-13T16:06:03.611660008Z" level=info msg="StartContainer for \"58e2667b95f246b48a214e275c39232d7ab750da3ceb3eab51ff1a5c184a817e\"" Feb 13 16:06:03.766406 containerd[2047]: time="2025-02-13T16:06:03.766021613Z" level=info msg="StartContainer for \"58e2667b95f246b48a214e275c39232d7ab750da3ceb3eab51ff1a5c184a817e\" returns successfully" Feb 13 16:06:04.508110 kubelet[3561]: E0213 16:06:04.507909 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:04.788099 kubelet[3561]: E0213 16:06:04.783976 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:04.788099 kubelet[3561]: W0213 16:06:04.784020 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:04.788099 kubelet[3561]: E0213 16:06:04.784062 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:04.788099 kubelet[3561]: E0213 16:06:04.786368 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:04.788099 kubelet[3561]: W0213 16:06:04.786601 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:04.788099 kubelet[3561]: E0213 16:06:04.787634 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:06:04.788099 kubelet[3561]: I0213 16:06:04.787234 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8487945587-9k5f7" podStartSLOduration=2.97380418 podStartE2EDuration="5.787164534s" podCreationTimestamp="2025-02-13 16:05:59 +0000 UTC" firstStartedPulling="2025-02-13 16:06:00.705257042 +0000 UTC m=+23.506869214" lastFinishedPulling="2025-02-13 16:06:03.518617384 +0000 UTC m=+26.320229568" observedRunningTime="2025-02-13 16:06:04.786682698 +0000 UTC m=+27.588294870" watchObservedRunningTime="2025-02-13 16:06:04.787164534 +0000 UTC m=+27.588776718" Feb 13 16:06:04.792207 kubelet[3561]: E0213 16:06:04.792121 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:04.792842 kubelet[3561]: W0213 16:06:04.792767 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:04.793135 kubelet[3561]: E0213 16:06:04.792858 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 16:06:04.797159 kubelet[3561]: E0213 16:06:04.797119 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:04.797249 kubelet[3561]: W0213 16:06:04.797158 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:04.797249 kubelet[3561]: E0213 16:06:04.797196 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 16:06:04.868461 kubelet[3561]: E0213 16:06:04.868390 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:04.868461 kubelet[3561]: W0213 16:06:04.868442 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:04.868613 kubelet[3561]: E0213 16:06:04.868516 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:04.868922 kubelet[3561]: E0213 16:06:04.868896 3561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:06:04.869048 kubelet[3561]: W0213 16:06:04.868921 3561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:06:04.869048 kubelet[3561]: E0213 16:06:04.868949 3561 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:06:05.259584 containerd[2047]: time="2025-02-13T16:06:05.259496140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:05.261849 containerd[2047]: time="2025-02-13T16:06:05.261705772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 16:06:05.263252 containerd[2047]: time="2025-02-13T16:06:05.263162788Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:05.268363 containerd[2047]: time="2025-02-13T16:06:05.268275352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:05.270194 containerd[2047]: time="2025-02-13T16:06:05.270125188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.7497229s" Feb 13 16:06:05.270528 containerd[2047]: time="2025-02-13T16:06:05.270495184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 16:06:05.275553 containerd[2047]: time="2025-02-13T16:06:05.274548832Z" level=info msg="CreateContainer within sandbox \"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 16:06:05.313918 containerd[2047]: time="2025-02-13T16:06:05.313290305Z" level=info msg="CreateContainer within sandbox 
\"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b709ca9a0d32c6790925451a5971782951e64925755292052751781f87ff0eb4\"" Feb 13 16:06:05.315831 containerd[2047]: time="2025-02-13T16:06:05.315726509Z" level=info msg="StartContainer for \"b709ca9a0d32c6790925451a5971782951e64925755292052751781f87ff0eb4\"" Feb 13 16:06:05.451688 containerd[2047]: time="2025-02-13T16:06:05.451617929Z" level=info msg="StartContainer for \"b709ca9a0d32c6790925451a5971782951e64925755292052751781f87ff0eb4\" returns successfully" Feb 13 16:06:05.519365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b709ca9a0d32c6790925451a5971782951e64925755292052751781f87ff0eb4-rootfs.mount: Deactivated successfully. Feb 13 16:06:05.768921 kubelet[3561]: I0213 16:06:05.768780 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:06.508147 kubelet[3561]: E0213 16:06:06.508058 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:06.721899 containerd[2047]: time="2025-02-13T16:06:06.721761416Z" level=info msg="shim disconnected" id=b709ca9a0d32c6790925451a5971782951e64925755292052751781f87ff0eb4 namespace=k8s.io Feb 13 16:06:06.721899 containerd[2047]: time="2025-02-13T16:06:06.721894964Z" level=warning msg="cleaning up after shim disconnected" id=b709ca9a0d32c6790925451a5971782951e64925755292052751781f87ff0eb4 namespace=k8s.io Feb 13 16:06:06.721899 containerd[2047]: time="2025-02-13T16:06:06.721919288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:06.784234 containerd[2047]: time="2025-02-13T16:06:06.781915232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 16:06:08.509036 kubelet[3561]: E0213 16:06:08.508701 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:10.508905 kubelet[3561]: E0213 16:06:10.508571 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:11.782197 containerd[2047]: time="2025-02-13T16:06:11.781894537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:11.783903 containerd[2047]: time="2025-02-13T16:06:11.783770077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 16:06:11.785719 containerd[2047]: time="2025-02-13T16:06:11.785618413Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:11.792992 containerd[2047]: time="2025-02-13T16:06:11.792850993Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:11.794741 containerd[2047]: time="2025-02-13T16:06:11.794485141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 5.012499937s" Feb 13 16:06:11.794741 containerd[2047]: time="2025-02-13T16:06:11.794551237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 16:06:11.798727 containerd[2047]: time="2025-02-13T16:06:11.798391393Z" level=info msg="CreateContainer within sandbox \"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 16:06:11.827079 containerd[2047]: time="2025-02-13T16:06:11.826990477Z" level=info msg="CreateContainer within sandbox \"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b22ce9c5da06cee5ad85638b753ad204f66596c70eeff0a46951e4c15c5f1711\"" Feb 13 16:06:11.836583 containerd[2047]: time="2025-02-13T16:06:11.830718013Z" level=info msg="StartContainer for \"b22ce9c5da06cee5ad85638b753ad204f66596c70eeff0a46951e4c15c5f1711\"" Feb 13 16:06:11.831476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968292777.mount: Deactivated successfully. Feb 13 16:06:11.959019 containerd[2047]: time="2025-02-13T16:06:11.958935098Z" level=info msg="StartContainer for \"b22ce9c5da06cee5ad85638b753ad204f66596c70eeff0a46951e4c15c5f1711\" returns successfully" Feb 13 16:06:12.507866 kubelet[3561]: E0213 16:06:12.507822 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:14.034272 containerd[2047]: time="2025-02-13T16:06:14.034181952Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:06:14.076346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b22ce9c5da06cee5ad85638b753ad204f66596c70eeff0a46951e4c15c5f1711-rootfs.mount: Deactivated successfully. 
Feb 13 16:06:14.099181 kubelet[3561]: I0213 16:06:14.099044 3561 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 16:06:14.156524 kubelet[3561]: I0213 16:06:14.156450 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f417de96-d005-4690-babe-3dd9712f90ee" podNamespace="kube-system" podName="coredns-76f75df574-2v6bn" Feb 13 16:06:14.168049 kubelet[3561]: I0213 16:06:14.167733 3561 topology_manager.go:215] "Topology Admit Handler" podUID="715df74d-104a-4b7c-8355-d8e0a4d0f71b" podNamespace="calico-apiserver" podName="calico-apiserver-6b77f6fc57-mmd5c" Feb 13 16:06:14.170349 kubelet[3561]: I0213 16:06:14.168607 3561 topology_manager.go:215] "Topology Admit Handler" podUID="c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42" podNamespace="kube-system" podName="coredns-76f75df574-vhcmn" Feb 13 16:06:14.171672 kubelet[3561]: I0213 16:06:14.171593 3561 topology_manager.go:215] "Topology Admit Handler" podUID="f8b312d7-5730-4769-8e43-9048d8afafd5" podNamespace="calico-system" podName="calico-kube-controllers-57f4fd4464-94h72" Feb 13 16:06:14.183532 kubelet[3561]: I0213 16:06:14.179247 3561 topology_manager.go:215] "Topology Admit Handler" podUID="7fdafe03-6c5b-493a-8ab3-33c001bd2fdc" podNamespace="calico-apiserver" podName="calico-apiserver-6b77f6fc57-kzwvn" Feb 13 16:06:14.195015 kubelet[3561]: W0213 16:06:14.193121 3561 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-49" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-49' and this object Feb 13 16:06:14.195015 kubelet[3561]: E0213 16:06:14.193186 3561 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-49" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-49' and this object Feb 13 16:06:14.242581 kubelet[3561]: I0213 16:06:14.242530 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/715df74d-104a-4b7c-8355-d8e0a4d0f71b-calico-apiserver-certs\") pod \"calico-apiserver-6b77f6fc57-mmd5c\" (UID: \"715df74d-104a-4b7c-8355-d8e0a4d0f71b\") " pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" Feb 13 16:06:14.243127 kubelet[3561]: I0213 16:06:14.243078 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8b312d7-5730-4769-8e43-9048d8afafd5-tigera-ca-bundle\") pod \"calico-kube-controllers-57f4fd4464-94h72\" (UID: \"f8b312d7-5730-4769-8e43-9048d8afafd5\") " pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" Feb 13 16:06:14.243477 kubelet[3561]: I0213 16:06:14.243406 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctjl\" (UniqueName: \"kubernetes.io/projected/f8b312d7-5730-4769-8e43-9048d8afafd5-kube-api-access-7ctjl\") pod \"calico-kube-controllers-57f4fd4464-94h72\" (UID: \"f8b312d7-5730-4769-8e43-9048d8afafd5\") " pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" Feb 13 16:06:14.243857 kubelet[3561]: I0213 16:06:14.243798 3561 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtpms\" (UniqueName: \"kubernetes.io/projected/c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42-kube-api-access-dtpms\") pod \"coredns-76f75df574-vhcmn\" (UID: \"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42\") " pod="kube-system/coredns-76f75df574-vhcmn" Feb 13 16:06:14.244126 kubelet[3561]: I0213 16:06:14.244001 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rs2\" (UniqueName: \"kubernetes.io/projected/7fdafe03-6c5b-493a-8ab3-33c001bd2fdc-kube-api-access-t8rs2\") pod \"calico-apiserver-6b77f6fc57-kzwvn\" (UID: \"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc\") " pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" Feb 13 16:06:14.244126 kubelet[3561]: I0213 16:06:14.244091 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f417de96-d005-4690-babe-3dd9712f90ee-config-volume\") pod \"coredns-76f75df574-2v6bn\" (UID: \"f417de96-d005-4690-babe-3dd9712f90ee\") " pod="kube-system/coredns-76f75df574-2v6bn" Feb 13 16:06:14.244471 kubelet[3561]: I0213 16:06:14.244391 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42-config-volume\") pod \"coredns-76f75df574-vhcmn\" (UID: \"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42\") " pod="kube-system/coredns-76f75df574-vhcmn" Feb 13 16:06:14.244804 kubelet[3561]: I0213 16:06:14.244521 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hb5\" (UniqueName: \"kubernetes.io/projected/f417de96-d005-4690-babe-3dd9712f90ee-kube-api-access-x5hb5\") pod \"coredns-76f75df574-2v6bn\" (UID: \"f417de96-d005-4690-babe-3dd9712f90ee\") " pod="kube-system/coredns-76f75df574-2v6bn" Feb 13 16:06:14.245029 kubelet[3561]: I0213 16:06:14.244862 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggt8b\" (UniqueName: \"kubernetes.io/projected/715df74d-104a-4b7c-8355-d8e0a4d0f71b-kube-api-access-ggt8b\") pod \"calico-apiserver-6b77f6fc57-mmd5c\" (UID: \"715df74d-104a-4b7c-8355-d8e0a4d0f71b\") " pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" Feb 13 16:06:14.245029 kubelet[3561]: I0213 16:06:14.245019 3561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7fdafe03-6c5b-493a-8ab3-33c001bd2fdc-calico-apiserver-certs\") pod \"calico-apiserver-6b77f6fc57-kzwvn\" (UID: \"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc\") " pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" Feb 13 16:06:14.382326 kubelet[3561]: I0213 16:06:14.378947 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:14.480053 containerd[2047]: time="2025-02-13T16:06:14.479985842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2v6bn,Uid:f417de96-d005-4690-babe-3dd9712f90ee,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:14.489447 containerd[2047]: time="2025-02-13T16:06:14.489315986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vhcmn,Uid:c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:14.516748 containerd[2047]: time="2025-02-13T16:06:14.516687338Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hd4qw,Uid:a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef,Namespace:calico-system,Attempt:0,}" Feb 13 16:06:14.525114 containerd[2047]: time="2025-02-13T16:06:14.524791190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f4fd4464-94h72,Uid:f8b312d7-5730-4769-8e43-9048d8afafd5,Namespace:calico-system,Attempt:0,}" Feb 13 16:06:15.207226 containerd[2047]: time="2025-02-13T16:06:15.207143606Z" level=error msg="Failed to destroy network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.209016 containerd[2047]: time="2025-02-13T16:06:15.207782354Z" level=error msg="encountered an error cleaning up failed sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.209016 containerd[2047]: time="2025-02-13T16:06:15.207870050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vhcmn,Uid:c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.209212 kubelet[3561]: E0213 16:06:15.208849 3561 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.210326 kubelet[3561]: E0213 16:06:15.208986 3561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vhcmn" Feb 13 16:06:15.210326 kubelet[3561]: E0213 16:06:15.209571 3561 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vhcmn" Feb 13 16:06:15.213026 kubelet[3561]: E0213 16:06:15.210521 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vhcmn_kube-system(c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-vhcmn_kube-system(c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vhcmn" podUID="c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42" Feb 13 16:06:15.416584 containerd[2047]: time="2025-02-13T16:06:15.416152659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-mmd5c,Uid:715df74d-104a-4b7c-8355-d8e0a4d0f71b,Namespace:calico-apiserver,Attempt:0,}" Feb 13 16:06:15.434798 containerd[2047]: time="2025-02-13T16:06:15.434351931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-kzwvn,Uid:7fdafe03-6c5b-493a-8ab3-33c001bd2fdc,Namespace:calico-apiserver,Attempt:0,}" Feb 13 16:06:15.443748 containerd[2047]: time="2025-02-13T16:06:15.443665323Z" level=error msg="Failed to destroy network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.444362 containerd[2047]: time="2025-02-13T16:06:15.444293487Z" level=error msg="encountered an error cleaning up failed sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.444520 containerd[2047]: time="2025-02-13T16:06:15.444393867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2v6bn,Uid:f417de96-d005-4690-babe-3dd9712f90ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.446459 kubelet[3561]: E0213 16:06:15.444794 3561 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.446459 kubelet[3561]: E0213 16:06:15.444878 3561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2v6bn" Feb 13 16:06:15.446459 kubelet[3561]: E0213 16:06:15.444921 3561 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2v6bn" Feb 13 16:06:15.448015 kubelet[3561]: E0213 16:06:15.445000 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2v6bn_kube-system(f417de96-d005-4690-babe-3dd9712f90ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2v6bn_kube-system(f417de96-d005-4690-babe-3dd9712f90ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2v6bn" podUID="f417de96-d005-4690-babe-3dd9712f90ee" Feb 13 16:06:15.462288 containerd[2047]: time="2025-02-13T16:06:15.462009627Z" level=error msg="Failed to destroy network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.464135 containerd[2047]: time="2025-02-13T16:06:15.464065983Z" level=error msg="encountered an error cleaning up failed sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.464616 containerd[2047]: time="2025-02-13T16:06:15.464345607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hd4qw,Uid:a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.465105 kubelet[3561]: E0213 16:06:15.465058 3561 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.465258 kubelet[3561]: E0213 16:06:15.465147 3561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:15.465258 kubelet[3561]: E0213 16:06:15.465214 3561 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hd4qw" Feb 13 16:06:15.465458 kubelet[3561]: E0213 16:06:15.465309 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hd4qw_calico-system(a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hd4qw_calico-system(a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:15.494570 containerd[2047]: time="2025-02-13T16:06:15.494468871Z" level=error msg="Failed to destroy network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.495439 containerd[2047]: time="2025-02-13T16:06:15.495349407Z" level=error msg="encountered an error cleaning up failed sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.495574 containerd[2047]: time="2025-02-13T16:06:15.495487995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f4fd4464-94h72,Uid:f8b312d7-5730-4769-8e43-9048d8afafd5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.495894 kubelet[3561]: E0213 16:06:15.495855 3561 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:15.496055 kubelet[3561]: E0213 16:06:15.495962 3561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" Feb 13 16:06:15.496143 kubelet[3561]: E0213 
16:06:15.496110 3561 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" Feb 13 16:06:15.496808 kubelet[3561]: E0213 16:06:15.496752 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57f4fd4464-94h72_calico-system(f8b312d7-5730-4769-8e43-9048d8afafd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57f4fd4464-94h72_calico-system(f8b312d7-5730-4769-8e43-9048d8afafd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" podUID="f8b312d7-5730-4769-8e43-9048d8afafd5" Feb 13 16:06:15.775090 containerd[2047]: time="2025-02-13T16:06:15.773670376Z" level=info msg="shim disconnected" id=b22ce9c5da06cee5ad85638b753ad204f66596c70eeff0a46951e4c15c5f1711 namespace=k8s.io Feb 13 16:06:15.776090 containerd[2047]: time="2025-02-13T16:06:15.775574692Z" level=warning msg="cleaning up after shim disconnected" id=b22ce9c5da06cee5ad85638b753ad204f66596c70eeff0a46951e4c15c5f1711 namespace=k8s.io Feb 13 16:06:15.776358 containerd[2047]: time="2025-02-13T16:06:15.775949440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:15.838024 kubelet[3561]: I0213 16:06:15.837854 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:15.847660 containerd[2047]: time="2025-02-13T16:06:15.842886269Z" level=info msg="StopPodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\"" Feb 13 16:06:15.851458 containerd[2047]: time="2025-02-13T16:06:15.850898129Z" level=info msg="Ensure that sandbox a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8 in task-service has been cleanup successfully" Feb 13 16:06:15.852449 kubelet[3561]: I0213 16:06:15.852378 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:15.856333 containerd[2047]: time="2025-02-13T16:06:15.856216937Z" level=info msg="StopPodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\"" Feb 13 16:06:15.857822 containerd[2047]: time="2025-02-13T16:06:15.857158601Z" level=info msg="Ensure that sandbox 678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da in task-service has been cleanup successfully" Feb 13 16:06:15.863964 kubelet[3561]: I0213 16:06:15.863692 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:15.871941 containerd[2047]: time="2025-02-13T16:06:15.871626929Z" level=info msg="StopPodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\"" Feb 13 16:06:15.877882 
containerd[2047]: time="2025-02-13T16:06:15.877802849Z" level=info msg="Ensure that sandbox d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa in task-service has been cleanup successfully" Feb 13 16:06:15.893202 kubelet[3561]: I0213 16:06:15.891706 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:15.898806 containerd[2047]: time="2025-02-13T16:06:15.897648329Z" level=info msg="StopPodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\"" Feb 13 16:06:15.904357 containerd[2047]: time="2025-02-13T16:06:15.904279517Z" level=info msg="Ensure that sandbox 182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560 in task-service has been cleanup successfully" Feb 13 16:06:16.010289 containerd[2047]: time="2025-02-13T16:06:16.010191554Z" level=error msg="Failed to destroy network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.013663 containerd[2047]: time="2025-02-13T16:06:16.013568282Z" level=error msg="encountered an error cleaning up failed sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.015757 containerd[2047]: time="2025-02-13T16:06:16.013692158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-kzwvn,Uid:7fdafe03-6c5b-493a-8ab3-33c001bd2fdc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.015942 kubelet[3561]: E0213 16:06:16.014104 3561 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.015942 kubelet[3561]: E0213 16:06:16.014193 3561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" Feb 13 16:06:16.015942 kubelet[3561]: E0213 16:06:16.014235 3561 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" Feb 13 16:06:16.016156 kubelet[3561]: E0213 16:06:16.014325 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b77f6fc57-kzwvn_calico-apiserver(7fdafe03-6c5b-493a-8ab3-33c001bd2fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b77f6fc57-kzwvn_calico-apiserver(7fdafe03-6c5b-493a-8ab3-33c001bd2fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" podUID="7fdafe03-6c5b-493a-8ab3-33c001bd2fdc" Feb 13 16:06:16.074986 containerd[2047]: time="2025-02-13T16:06:16.072284942Z" level=error msg="StopPodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" failed" error="failed to destroy network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.075168 kubelet[3561]: E0213 16:06:16.074589 3561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:16.075168 kubelet[3561]: E0213 16:06:16.074712 3561 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da"} Feb 13 16:06:16.075168 kubelet[3561]: E0213 16:06:16.074780 3561 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f417de96-d005-4690-babe-3dd9712f90ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:16.075168 kubelet[3561]: E0213 16:06:16.074835 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f417de96-d005-4690-babe-3dd9712f90ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2v6bn" podUID="f417de96-d005-4690-babe-3dd9712f90ee" Feb 13 16:06:16.098061 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560-shm.mount: Deactivated successfully. Feb 13 16:06:16.099327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da-shm.mount: Deactivated successfully. Feb 13 16:06:16.099956 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8-shm.mount: Deactivated successfully. Feb 13 16:06:16.119471 containerd[2047]: time="2025-02-13T16:06:16.118386206Z" level=error msg="StopPodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" failed" error="failed to destroy network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.119649 kubelet[3561]: E0213 16:06:16.118875 3561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:16.119649 kubelet[3561]: E0213 16:06:16.118972 3561 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa"} Feb 13 16:06:16.119649 kubelet[3561]: E0213 16:06:16.119049 3561 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8b312d7-5730-4769-8e43-9048d8afafd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:16.119649 kubelet[3561]: E0213 16:06:16.119113 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8b312d7-5730-4769-8e43-9048d8afafd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" podUID="f8b312d7-5730-4769-8e43-9048d8afafd5" Feb 13 16:06:16.130278 containerd[2047]: time="2025-02-13T16:06:16.129241802Z" level=error msg="Failed to destroy network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.136360 containerd[2047]: time="2025-02-13T16:06:16.131391254Z" level=error msg="encountered an error cleaning up failed sandbox 
\"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.136360 containerd[2047]: time="2025-02-13T16:06:16.131984258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-mmd5c,Uid:715df74d-104a-4b7c-8355-d8e0a4d0f71b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.136360 containerd[2047]: time="2025-02-13T16:06:16.132256922Z" level=error msg="StopPodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" failed" error="failed to destroy network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.137199 kubelet[3561]: E0213 16:06:16.135076 3561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:16.137199 kubelet[3561]: E0213 16:06:16.135138 3561 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8"} Feb 13 16:06:16.137199 kubelet[3561]: E0213 16:06:16.135205 3561 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:16.137199 kubelet[3561]: E0213 16:06:16.135256 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vhcmn" podUID="c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42" Feb 13 16:06:16.138550 kubelet[3561]: E0213 16:06:16.135313 3561 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.138550 kubelet[3561]: E0213 16:06:16.135363 3561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" Feb 13 16:06:16.141177 kubelet[3561]: E0213 16:06:16.135403 3561 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" Feb 13 16:06:16.141177 kubelet[3561]: E0213 16:06:16.140222 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b77f6fc57-mmd5c_calico-apiserver(715df74d-104a-4b7c-8355-d8e0a4d0f71b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b77f6fc57-mmd5c_calico-apiserver(715df74d-104a-4b7c-8355-d8e0a4d0f71b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" podUID="715df74d-104a-4b7c-8355-d8e0a4d0f71b" Feb 13 16:06:16.144196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c-shm.mount: Deactivated successfully. 
Feb 13 16:06:16.149647 containerd[2047]: time="2025-02-13T16:06:16.149576366Z" level=error msg="StopPodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" failed" error="failed to destroy network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.150299 kubelet[3561]: E0213 16:06:16.150231 3561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:16.150473 kubelet[3561]: E0213 16:06:16.150343 3561 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560"} Feb 13 16:06:16.150473 kubelet[3561]: E0213 16:06:16.150410 3561 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:16.150649 kubelet[3561]: E0213 16:06:16.150504 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hd4qw" podUID="a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef" Feb 13 16:06:16.901665 containerd[2047]: time="2025-02-13T16:06:16.900111942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 16:06:16.906487 kubelet[3561]: I0213 16:06:16.903745 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:16.912806 containerd[2047]: time="2025-02-13T16:06:16.908118078Z" level=info msg="StopPodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\"" Feb 13 16:06:16.912806 containerd[2047]: time="2025-02-13T16:06:16.908613606Z" level=info msg="Ensure that sandbox 8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c in task-service has been cleanup successfully" Feb 13 16:06:16.919798 kubelet[3561]: I0213 16:06:16.919004 3561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:16.922979 containerd[2047]: time="2025-02-13T16:06:16.922174914Z" level=info msg="StopPodSandbox 
for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\"" Feb 13 16:06:16.922979 containerd[2047]: time="2025-02-13T16:06:16.922516350Z" level=info msg="Ensure that sandbox 199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905 in task-service has been cleanup successfully" Feb 13 16:06:16.996778 containerd[2047]: time="2025-02-13T16:06:16.996696787Z" level=error msg="StopPodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" failed" error="failed to destroy network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:16.997277 kubelet[3561]: E0213 16:06:16.997241 3561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:16.997605 kubelet[3561]: E0213 16:06:16.997583 3561 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c"} Feb 13 16:06:16.997909 kubelet[3561]: E0213 16:06:16.997872 3561 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"715df74d-104a-4b7c-8355-d8e0a4d0f71b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:16.998184 kubelet[3561]: E0213 16:06:16.998137 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"715df74d-104a-4b7c-8355-d8e0a4d0f71b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" podUID="715df74d-104a-4b7c-8355-d8e0a4d0f71b" Feb 13 16:06:17.002937 containerd[2047]: time="2025-02-13T16:06:17.002870031Z" level=error msg="StopPodSandbox for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" failed" error="failed to destroy network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:06:17.003544 kubelet[3561]: E0213 16:06:17.003485 3561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:17.003703 kubelet[3561]: E0213 16:06:17.003579 3561 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905"} Feb 13 16:06:17.003762 kubelet[3561]: E0213 16:06:17.003707 3561 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 16:06:17.003897 kubelet[3561]: E0213 16:06:17.003776 3561 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" podUID="7fdafe03-6c5b-493a-8ab3-33c001bd2fdc" Feb 13 16:06:22.777095 systemd[1]: Started sshd@7-172.31.19.49:22-139.178.68.195:46002.service - OpenSSH per-connection server daemon (139.178.68.195:46002). Feb 13 16:06:22.980613 sshd[4627]: Accepted publickey for core from 139.178.68.195 port 46002 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:22.984694 sshd[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:22.998763 systemd-logind[2020]: New session 8 of user core. Feb 13 16:06:23.008100 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 16:06:23.391395 sshd[4627]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:23.405384 systemd[1]: sshd@7-172.31.19.49:22-139.178.68.195:46002.service: Deactivated successfully. Feb 13 16:06:23.425210 systemd-logind[2020]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:06:23.428668 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:06:23.435035 systemd-logind[2020]: Removed session 8. Feb 13 16:06:25.651667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725338994.mount: Deactivated successfully. 
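
The cluster of CreatePodSandbox and KillPodSandbox failures above all reduce to one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node DaemonSet pod writes (typically via a /var/lib/calico hostPath mount) once it starts. Until that pod is running — its image is still being pulled at this point — every pod network add and delete on the node fails with the error repeated above. A minimal Go sketch of the failing check, illustrative only (the real logic lives in Calico's CNI plugin and libcalico-go, not in this form):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is written by calico/node at startup via its hostPath
    // mount; the CNI plugin reads it to find its Calico node resource.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        if _, err := os.Stat(nodenameFile); err != nil {
            // The condition behind every "failed (add)" / "failed (delete)" above:
            // err reads "stat /var/lib/calico/nodename: no such file or directory".
            fmt.Fprintf(os.Stderr, "%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
            os.Exit(1)
        }
        name, err := os.ReadFile(nodenameFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("nodename:", strings.TrimSpace(string(name)))
    }
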
Feb 13 16:06:25.758344 containerd[2047]: time="2025-02-13T16:06:25.758276546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.760245 containerd[2047]: time="2025-02-13T16:06:25.760174202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 16:06:25.761153 containerd[2047]: time="2025-02-13T16:06:25.761025650Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.765057 containerd[2047]: time="2025-02-13T16:06:25.764951078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.766806 containerd[2047]: time="2025-02-13T16:06:25.766527854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.866312436s" Feb 13 16:06:25.766806 containerd[2047]: time="2025-02-13T16:06:25.766606526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 16:06:25.814457 containerd[2047]: time="2025-02-13T16:06:25.814181714Z" level=info msg="CreateContainer within sandbox \"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 16:06:25.844825 containerd[2047]: time="2025-02-13T16:06:25.844751834Z" level=info msg="CreateContainer within sandbox \"feb30b19ae4e50ef947030e0a1212ef2af5e87eb0a6c175c7ec9a86e273d144c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"54189c6c82c8f9232311c9cabc30f2ebe43dd766c490769a4f0147e8b3246e61\"" Feb 13 16:06:25.847543 containerd[2047]: time="2025-02-13T16:06:25.846678147Z" level=info msg="StartContainer for \"54189c6c82c8f9232311c9cabc30f2ebe43dd766c490769a4f0147e8b3246e61\"" Feb 13 16:06:25.987693 containerd[2047]: time="2025-02-13T16:06:25.987272151Z" level=info msg="StartContainer for \"54189c6c82c8f9232311c9cabc30f2ebe43dd766c490769a4f0147e8b3246e61\" returns successfully" Feb 13 16:06:26.229171 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 16:06:26.229363 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 16:06:28.424949 systemd[1]: Started sshd@8-172.31.19.49:22-139.178.68.195:38274.service - OpenSSH per-connection server daemon (139.178.68.195:38274).
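
The pull that unblocks Calico completes above: 137671762 bytes read in 8.866312436s, roughly 131 MiB at about 14.8 MiB/s. A back-of-envelope check, with both constants taken from the log lines above:

    package main

    import "fmt"

    func main() {
        const bytesRead = 137671762 // "bytes read" from the stop-pulling event
        const seconds = 8.866312436 // duration from the "Pulled image" event
        mib := float64(bytesRead) / (1 << 20)
        fmt.Printf("%.1f MiB in %.3fs ~ %.1f MiB/s\n", mib, seconds, mib/seconds)
    }
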
Feb 13 16:06:28.511986 containerd[2047]: time="2025-02-13T16:06:28.510711184Z" level=info msg="StopPodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\"" Feb 13 16:06:28.633468 kernel: bpftool[4873]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 16:06:28.697572 sshd[4845]: Accepted publickey for core from 139.178.68.195 port 38274 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:28.711244 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:28.760007 systemd-logind[2020]: New session 9 of user core. Feb 13 16:06:28.769956 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:06:28.902146 kubelet[3561]: I0213 16:06:28.900711 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-d6lrl" podStartSLOduration=3.967514262 podStartE2EDuration="28.900643554s" podCreationTimestamp="2025-02-13 16:06:00 +0000 UTC" firstStartedPulling="2025-02-13 16:06:00.834330842 +0000 UTC m=+23.635943002" lastFinishedPulling="2025-02-13 16:06:25.767460134 +0000 UTC m=+48.569072294" observedRunningTime="2025-02-13 16:06:27.012588228 +0000 UTC m=+49.814200568" watchObservedRunningTime="2025-02-13 16:06:28.900643554 +0000 UTC m=+51.702255738" Feb 13 16:06:29.254914 sshd[4845]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:28.855 [INFO][4867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:28.855 [INFO][4867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" iface="eth0" netns="/var/run/netns/cni-049e9f17-ceb1-79a3-b341-ec93137ff02d" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:28.859 [INFO][4867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" iface="eth0" netns="/var/run/netns/cni-049e9f17-ceb1-79a3-b341-ec93137ff02d" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:28.862 [INFO][4867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" iface="eth0" netns="/var/run/netns/cni-049e9f17-ceb1-79a3-b341-ec93137ff02d" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:28.862 [INFO][4867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:28.862 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.188 [INFO][4889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.194 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.195 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.231 [WARNING][4889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.232 [INFO][4889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.240 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:29.274494 containerd[2047]: 2025-02-13 16:06:29.255 [INFO][4867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:29.274494 containerd[2047]: time="2025-02-13T16:06:29.270900760Z" level=info msg="TearDown network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" successfully" Feb 13 16:06:29.274494 containerd[2047]: time="2025-02-13T16:06:29.270945400Z" level=info msg="StopPodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" returns successfully" Feb 13 16:06:29.290799 containerd[2047]: time="2025-02-13T16:06:29.283662328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vhcmn,Uid:c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42,Namespace:kube-system,Attempt:1,}" Feb 13 16:06:29.289063 systemd[1]: run-netns-cni\x2d049e9f17\x2dceb1\x2d79a3\x2db341\x2dec93137ff02d.mount: Deactivated successfully. Feb 13 16:06:29.300646 systemd[1]: sshd@8-172.31.19.49:22-139.178.68.195:38274.service: Deactivated successfully. Feb 13 16:06:29.308310 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:06:29.380725 systemd-logind[2020]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:06:29.387705 systemd-logind[2020]: Removed session 9. Feb 13 16:06:29.518006 containerd[2047]: time="2025-02-13T16:06:29.516061973Z" level=info msg="StopPodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\"" Feb 13 16:06:29.528139 containerd[2047]: time="2025-02-13T16:06:29.524904353Z" level=info msg="StopPodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\"" Feb 13 16:06:30.046205 systemd-networkd[1607]: vxlan.calico: Link UP Feb 13 16:06:30.046227 systemd-networkd[1607]: vxlan.calico: Gained carrier Feb 13 16:06:30.054590 (udev-worker)[5002]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:30.089609 systemd-networkd[1607]: calic2cf6ae5527: Link UP Feb 13 16:06:30.092366 (udev-worker)[5001]: Network interface NamePolicy= disabled on kernel command line. 
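
With calico-node up, the dataplane materializes: systemd-networkd reports the vxlan.calico overlay device (created by Calico's felix when running in VXLAN mode) and the per-endpoint host-side veth calic2cf6ae5527 coming up. A small stdlib sketch for confirming those interfaces exist, assuming it runs on this node (interface names taken from the log):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, want := range []string{"vxlan.calico", "calic2cf6ae5527"} {
            ifc, err := net.InterfaceByName(want)
            if err != nil {
                fmt.Printf("%s: not present (%v)\n", want, err)
                continue
            }
            fmt.Printf("%s: mtu=%d flags=%v\n", ifc.Name, ifc.MTU, ifc.Flags)
        }
    }
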
Feb 13 16:06:30.094374 systemd-networkd[1607]: calic2cf6ae5527: Gained carrier Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.480 [INFO][4919] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0 coredns-76f75df574- kube-system c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42 812 0 2025-02-13 16:05:49 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-49 coredns-76f75df574-vhcmn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic2cf6ae5527 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.481 [INFO][4919] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.787 [INFO][4932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" HandleID="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.858 [INFO][4932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" HandleID="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf860), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-49", "pod":"coredns-76f75df574-vhcmn", "timestamp":"2025-02-13 16:06:29.787631166 +0000 UTC"}, Hostname:"ip-172-31-19-49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.859 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.859 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.859 [INFO][4932] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-49' Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.867 [INFO][4932] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.904 [INFO][4932] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.926 [INFO][4932] ipam/ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.948 [INFO][4932] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.964 [INFO][4932] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.965 [INFO][4932] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:29.971 [INFO][4932] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295 Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:30.003 [INFO][4932] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:30.035 [INFO][4932] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.129/26] block=192.168.65.128/26 handle="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:30.043 [INFO][4932] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.129/26] handle="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" host="ip-172-31-19-49" Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:30.043 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:30.161227 containerd[2047]: 2025-02-13 16:06:30.043 [INFO][4932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.129/26] IPv6=[] ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" HandleID="k8s-pod-network.2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:30.167496 containerd[2047]: 2025-02-13 16:06:30.065 [INFO][4919] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"", Pod:"coredns-76f75df574-vhcmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2cf6ae5527", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:30.167496 containerd[2047]: 2025-02-13 16:06:30.066 [INFO][4919] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.129/32] ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:30.167496 containerd[2047]: 2025-02-13 16:06:30.066 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2cf6ae5527 ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:30.167496 containerd[2047]: 2025-02-13 16:06:30.095 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0"
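
Two details in the IPAM and endpoint dumps above are easy to misread: the node's affinity block 192.168.65.128/26 holds 64 addresses (.128 through .191), of which this pod gets 192.168.65.129; and the endpoint ports are printed in hex, so Port:0x35 is DNS on 53 and Port:0x23c1 is the coredns metrics port 9153. A quick sanity check of both with the standard library:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.65.128/26")
        addr := netip.MustParseAddr("192.168.65.129")
        fmt.Println("addresses in block:", 1<<(32-block.Bits()))  // 64
        fmt.Println("block contains .129:", block.Contains(addr)) // true
        fmt.Println("0x35 =", 0x35, "0x23c1 =", 0x23c1)           // 53 9153
    }
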
Feb 13 16:06:30.167496 containerd[2047]: 2025-02-13 16:06:30.100 [INFO][4919] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295", Pod:"coredns-76f75df574-vhcmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2cf6ae5527", MAC:"76:40:29:ac:e2:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:30.167496 containerd[2047]: 2025-02-13 16:06:30.146 [INFO][4919] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295" Namespace="kube-system" Pod="coredns-76f75df574-vhcmn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:29.941 [INFO][4972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:29.941 [INFO][4972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" iface="eth0" netns="/var/run/netns/cni-ceffed90-2bd5-de1d-56e1-37f12e6e277e" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:29.942 [INFO][4972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" iface="eth0" netns="/var/run/netns/cni-ceffed90-2bd5-de1d-56e1-37f12e6e277e" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:29.943 [INFO][4972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" iface="eth0" netns="/var/run/netns/cni-ceffed90-2bd5-de1d-56e1-37f12e6e277e" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:29.944 [INFO][4972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:29.944 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.252 [INFO][4990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.253 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.255 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.328 [WARNING][4990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.328 [INFO][4990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.334 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:30.377549 containerd[2047]: 2025-02-13 16:06:30.354 [INFO][4972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:30.387088 containerd[2047]: time="2025-02-13T16:06:30.383196197Z" level=info msg="TearDown network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" successfully" Feb 13 16:06:30.387088 containerd[2047]: time="2025-02-13T16:06:30.383253845Z" level=info msg="StopPodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" returns successfully" Feb 13 16:06:30.387721 containerd[2047]: time="2025-02-13T16:06:30.387672209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f4fd4464-94h72,Uid:f8b312d7-5730-4769-8e43-9048d8afafd5,Namespace:calico-system,Attempt:1,}" Feb 13 16:06:30.390670 systemd[1]: run-netns-cni\x2dceffed90\x2d2bd5\x2dde1d\x2d56e1\x2d37f12e6e277e.mount: Deactivated successfully. Feb 13 16:06:30.423811 containerd[2047]: time="2025-02-13T16:06:30.423594089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:30.427211 containerd[2047]: time="2025-02-13T16:06:30.423732269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:30.428594 containerd[2047]: time="2025-02-13T16:06:30.427136345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:30.428594 containerd[2047]: time="2025-02-13T16:06:30.427361057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:30.535038 containerd[2047]: time="2025-02-13T16:06:30.534943338Z" level=info msg="StopPodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\"" Feb 13 16:06:30.547095 containerd[2047]: time="2025-02-13T16:06:30.535299750Z" level=info msg="StopPodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\"" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:29.953 [INFO][4971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:29.953 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" iface="eth0" netns="/var/run/netns/cni-8fa7036c-7c54-c49d-4203-7ac3c1e45d74" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:29.960 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" iface="eth0" netns="/var/run/netns/cni-8fa7036c-7c54-c49d-4203-7ac3c1e45d74" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:29.963 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" iface="eth0" netns="/var/run/netns/cni-8fa7036c-7c54-c49d-4203-7ac3c1e45d74" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:29.964 [INFO][4971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:29.965 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.429 [INFO][4995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.430 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.430 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.470 [WARNING][4995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.473 [INFO][4995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.521 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:30.592275 containerd[2047]: 2025-02-13 16:06:30.561 [INFO][4971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:30.604150 containerd[2047]: time="2025-02-13T16:06:30.594317022Z" level=info msg="TearDown network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" successfully" Feb 13 16:06:30.604150 containerd[2047]: time="2025-02-13T16:06:30.595755234Z" level=info msg="StopPodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" returns successfully" Feb 13 16:06:30.605576 systemd[1]: run-netns-cni\x2d8fa7036c\x2d7c54\x2dc49d\x2d4203\x2d7ac3c1e45d74.mount: Deactivated successfully. Feb 13 16:06:30.615881 containerd[2047]: time="2025-02-13T16:06:30.608362410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2v6bn,Uid:f417de96-d005-4690-babe-3dd9712f90ee,Namespace:kube-system,Attempt:1,}" Feb 13 16:06:31.005449 containerd[2047]: time="2025-02-13T16:06:31.005312260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vhcmn,Uid:c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42,Namespace:kube-system,Attempt:1,} returns sandbox id \"2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295\"" Feb 13 16:06:31.029966 containerd[2047]: time="2025-02-13T16:06:31.027274048Z" level=info msg="CreateContainer within sandbox \"2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:31.197904 containerd[2047]: time="2025-02-13T16:06:31.197352629Z" level=info msg="CreateContainer within sandbox \"2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e99f52efaeef69fef4dc3b7393317d0cabe65d63a26b9e08e8445a585ba62ce\"" Feb 13 16:06:31.202524 containerd[2047]: time="2025-02-13T16:06:31.200999489Z" level=info msg="StartContainer for \"2e99f52efaeef69fef4dc3b7393317d0cabe65d63a26b9e08e8445a585ba62ce\"" Feb 13 16:06:31.206573 systemd-networkd[1607]: vxlan.calico: Gained IPv6LL Feb 13 16:06:31.537366 containerd[2047]: time="2025-02-13T16:06:31.536889331Z" level=info msg="StopPodSandbox for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\"" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.037 [INFO][5099] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.039 [INFO][5099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" iface="eth0" netns="/var/run/netns/cni-50e98534-7538-ac12-2e81-012d53e8e57b" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.040 [INFO][5099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" iface="eth0" netns="/var/run/netns/cni-50e98534-7538-ac12-2e81-012d53e8e57b" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.041 [INFO][5099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" iface="eth0" netns="/var/run/netns/cni-50e98534-7538-ac12-2e81-012d53e8e57b" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.041 [INFO][5099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.041 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.350 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.362 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.364 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.456 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.467 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.506 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:31.614375 containerd[2047]: 2025-02-13 16:06:31.569 [INFO][5099] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:31.629135 containerd[2047]: time="2025-02-13T16:06:31.628853911Z" level=info msg="TearDown network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" successfully" Feb 13 16:06:31.633362 containerd[2047]: time="2025-02-13T16:06:31.629677879Z" level=info msg="StopPodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" returns successfully" Feb 13 16:06:31.631199 systemd[1]: run-netns-cni\x2d50e98534\x2d7538\x2dac12\x2d2e81\x2d012d53e8e57b.mount: Deactivated successfully. 
Feb 13 16:06:31.645382 containerd[2047]: time="2025-02-13T16:06:31.643952911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hd4qw,Uid:a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef,Namespace:calico-system,Attempt:1,}" Feb 13 16:06:31.874584 (udev-worker)[5034]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:31.938997 systemd-networkd[1607]: calib10159def70: Link UP Feb 13 16:06:31.952110 systemd-networkd[1607]: calib10159def70: Gained carrier Feb 13 16:06:32.051067 containerd[2047]: time="2025-02-13T16:06:32.049208825Z" level=info msg="StartContainer for \"2e99f52efaeef69fef4dc3b7393317d0cabe65d63a26b9e08e8445a585ba62ce\" returns successfully" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.049 [INFO][5055] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0 calico-kube-controllers-57f4fd4464- calico-system f8b312d7-5730-4769-8e43-9048d8afafd5 819 0 2025-02-13 16:06:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57f4fd4464 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-49 calico-kube-controllers-57f4fd4464-94h72 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib10159def70 [] []}} ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.052 [INFO][5055] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.472 [INFO][5150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" HandleID="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.549 [INFO][5150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" HandleID="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000162700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-49", "pod":"calico-kube-controllers-57f4fd4464-94h72", "timestamp":"2025-02-13 16:06:31.472465794 +0000 UTC"}, Hostname:"ip-172-31-19-49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.550 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.550 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.550 [INFO][5150] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-49' Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.567 [INFO][5150] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.607 [INFO][5150] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.637 [INFO][5150] ipam/ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.658 [INFO][5150] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.670 [INFO][5150] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.671 [INFO][5150] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.686 [INFO][5150] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383 Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.732 [INFO][5150] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.758 [INFO][5150] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.130/26] block=192.168.65.128/26 handle="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.759 [INFO][5150] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.130/26] handle="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" host="ip-172-31-19-49" Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.759 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
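The ADD flow above is block-affinity IPAM in action: the node already owns an affinity for 192.168.65.128/26, so the plugin loads that block under the host-wide lock and hands out the first free address, here 192.168.65.130. A stdlib-only Go sketch of the "first free address in the block" step, with a hypothetical in-memory used-set instead of Calico's datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a CIDR block and returns the first address that is
// not already reserved; ok is false when the block is exhausted.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.65.128/26")
	// .128 and .129 marked used purely for illustration (earlier
	// reservations on this node are not shown in this excerpt).
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.65.128"): true,
		netip.MustParseAddr("192.168.65.129"): true,
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("assigning", a) // prints 192.168.65.130, as in the log
	}
}
```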
Feb 13 16:06:32.053286 containerd[2047]: 2025-02-13 16:06:31.759 [INFO][5150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.130/26] IPv6=[] ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" HandleID="k8s-pod-network.d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.059388 containerd[2047]: 2025-02-13 16:06:31.845 [INFO][5055] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0", GenerateName:"calico-kube-controllers-57f4fd4464-", Namespace:"calico-system", SelfLink:"", UID:"f8b312d7-5730-4769-8e43-9048d8afafd5", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f4fd4464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"", Pod:"calico-kube-controllers-57f4fd4464-94h72", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib10159def70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:32.059388 containerd[2047]: 2025-02-13 16:06:31.846 [INFO][5055] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.130/32] ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.059388 containerd[2047]: 2025-02-13 16:06:31.846 [INFO][5055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib10159def70 ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.059388 containerd[2047]: 2025-02-13 16:06:31.959 [INFO][5055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.059388 containerd[2047]: 2025-02-13 16:06:31.963 [INFO][5055] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0", GenerateName:"calico-kube-controllers-57f4fd4464-", Namespace:"calico-system", SelfLink:"", UID:"f8b312d7-5730-4769-8e43-9048d8afafd5", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f4fd4464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383", Pod:"calico-kube-controllers-57f4fd4464-94h72", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib10159def70", MAC:"56:c9:27:4f:06:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:32.059388 containerd[2047]: 2025-02-13 16:06:32.010 [INFO][5055] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383" Namespace="calico-system" Pod="calico-kube-controllers-57f4fd4464-94h72" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:32.101746 systemd-networkd[1607]: calic2cf6ae5527: Gained IPv6LL Feb 13 16:06:32.201528 systemd-networkd[1607]: cali09afa656bb7: Link UP Feb 13 16:06:32.218367 systemd-networkd[1607]: cali09afa656bb7: Gained carrier Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.234 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.237 [INFO][5109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" iface="eth0" netns="/var/run/netns/cni-cf8989bf-56c7-3f09-79e8-a4fc02d108ff" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.239 [INFO][5109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" iface="eth0" netns="/var/run/netns/cni-cf8989bf-56c7-3f09-79e8-a4fc02d108ff" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.241 [INFO][5109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" iface="eth0" netns="/var/run/netns/cni-cf8989bf-56c7-3f09-79e8-a4fc02d108ff" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.241 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.241 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.930 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:31.950 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:32.097 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:32.235 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:32.236 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:32.265 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:32.297750 containerd[2047]: 2025-02-13 16:06:32.282 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:32.301991 containerd[2047]: time="2025-02-13T16:06:32.300594127Z" level=info msg="TearDown network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" successfully" Feb 13 16:06:32.301991 containerd[2047]: time="2025-02-13T16:06:32.300671791Z" level=info msg="StopPodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" returns successfully" Feb 13 16:06:32.303088 kubelet[3561]: I0213 16:06:32.303040 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vhcmn" podStartSLOduration=43.301940527 podStartE2EDuration="43.301940527s" podCreationTimestamp="2025-02-13 16:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:32.197911542 +0000 UTC m=+54.999523726" watchObservedRunningTime="2025-02-13 16:06:32.301940527 +0000 UTC m=+55.103552711" Feb 13 16:06:32.310733 containerd[2047]: time="2025-02-13T16:06:32.310647235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-mmd5c,Uid:715df74d-104a-4b7c-8355-d8e0a4d0f71b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.162 [INFO][5098] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0 coredns-76f75df574- kube-system f417de96-d005-4690-babe-3dd9712f90ee 820 0 2025-02-13 16:05:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-49 coredns-76f75df574-2v6bn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09afa656bb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.167 [INFO][5098] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.819 [INFO][5160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" HandleID="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.896 [INFO][5160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" HandleID="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edf70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-49", "pod":"coredns-76f75df574-2v6bn", "timestamp":"2025-02-13 16:06:31.8191628 +0000 
UTC"}, Hostname:"ip-172-31-19-49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.900 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.901 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.901 [INFO][5160] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-49' Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.914 [INFO][5160] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.941 [INFO][5160] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.969 [INFO][5160] ipam/ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:31.989 [INFO][5160] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.016 [INFO][5160] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.016 [INFO][5160] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.028 [INFO][5160] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406 Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.052 [INFO][5160] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.092 [INFO][5160] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.131/26] block=192.168.65.128/26 handle="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.094 [INFO][5160] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.131/26] handle="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" host="ip-172-31-19-49" Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.096 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:32.321994 containerd[2047]: 2025-02-13 16:06:32.097 [INFO][5160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.131/26] IPv6=[] ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" HandleID="k8s-pod-network.d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:32.323142 containerd[2047]: 2025-02-13 16:06:32.151 [INFO][5098] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f417de96-d005-4690-babe-3dd9712f90ee", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"", Pod:"coredns-76f75df574-2v6bn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09afa656bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:32.323142 containerd[2047]: 2025-02-13 16:06:32.156 [INFO][5098] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.131/32] ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:32.323142 containerd[2047]: 2025-02-13 16:06:32.156 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09afa656bb7 ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:32.323142 containerd[2047]: 2025-02-13 16:06:32.244 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" 
Feb 13 16:06:32.323142 containerd[2047]: 2025-02-13 16:06:32.272 [INFO][5098] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f417de96-d005-4690-babe-3dd9712f90ee", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406", Pod:"coredns-76f75df574-2v6bn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09afa656bb7", MAC:"8e:4f:6a:7a:13:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:32.323142 containerd[2047]: 2025-02-13 16:06:32.305 [INFO][5098] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406" Namespace="kube-system" Pod="coredns-76f75df574-2v6bn" WorkloadEndpoint="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:32.404727 systemd[1]: run-netns-cni\x2dcf8989bf\x2d56c7\x2d3f09\x2d79e8\x2da4fc02d108ff.mount: Deactivated successfully. Feb 13 16:06:32.531825 containerd[2047]: time="2025-02-13T16:06:32.485830339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:32.531825 containerd[2047]: time="2025-02-13T16:06:32.511178132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:32.531825 containerd[2047]: time="2025-02-13T16:06:32.511224176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.542555 containerd[2047]: time="2025-02-13T16:06:32.533246084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.659629 containerd[2047]: time="2025-02-13T16:06:32.659131796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:32.659629 containerd[2047]: time="2025-02-13T16:06:32.659238356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:32.659629 containerd[2047]: time="2025-02-13T16:06:32.659276336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.667743 containerd[2047]: time="2025-02-13T16:06:32.659495300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.678527 systemd-resolved[1940]: Under memory pressure, flushing caches. Feb 13 16:06:32.679411 systemd-resolved[1940]: Flushed all caches. Feb 13 16:06:32.682535 systemd-journald[1517]: Under memory pressure, flushing caches. Feb 13 16:06:33.040630 containerd[2047]: time="2025-02-13T16:06:33.040547850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f4fd4464-94h72,Uid:f8b312d7-5730-4769-8e43-9048d8afafd5,Namespace:calico-system,Attempt:1,} returns sandbox id \"d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383\"" Feb 13 16:06:33.056626 containerd[2047]: time="2025-02-13T16:06:33.056566854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 16:06:33.163475 containerd[2047]: time="2025-02-13T16:06:33.162710251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2v6bn,Uid:f417de96-d005-4690-babe-3dd9712f90ee,Namespace:kube-system,Attempt:1,} returns sandbox id \"d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406\"" Feb 13 16:06:33.194720 containerd[2047]: time="2025-02-13T16:06:33.194656951Z" level=info msg="CreateContainer within sandbox \"d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:32.473 [INFO][5224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:32.480 [INFO][5224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" iface="eth0" netns="/var/run/netns/cni-ed05ab96-95a3-44a0-ebba-bababc965c2a" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:32.504 [INFO][5224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" iface="eth0" netns="/var/run/netns/cni-ed05ab96-95a3-44a0-ebba-bababc965c2a" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:32.533 [INFO][5224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" iface="eth0" netns="/var/run/netns/cni-ed05ab96-95a3-44a0-ebba-bababc965c2a" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:32.533 [INFO][5224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:32.533 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.042 [INFO][5328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.045 [INFO][5328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.045 [INFO][5328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.112 [WARNING][5328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.112 [INFO][5328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.146 [INFO][5328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:33.203101 containerd[2047]: 2025-02-13 16:06:33.174 [INFO][5224] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:33.214335 containerd[2047]: time="2025-02-13T16:06:33.213029995Z" level=info msg="TearDown network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" successfully" Feb 13 16:06:33.214335 containerd[2047]: time="2025-02-13T16:06:33.213928315Z" level=info msg="StopPodSandbox for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" returns successfully" Feb 13 16:06:33.223729 containerd[2047]: time="2025-02-13T16:06:33.218370823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-kzwvn,Uid:7fdafe03-6c5b-493a-8ab3-33c001bd2fdc,Namespace:calico-apiserver,Attempt:1,}" Feb 13 16:06:33.288255 containerd[2047]: time="2025-02-13T16:06:33.288199927Z" level=info msg="CreateContainer within sandbox \"d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be0b346d1f0864e47fd7e7e2181a72d4363098c15be7f264476132eb3804e4e3\"" Feb 13 16:06:33.307140 containerd[2047]: time="2025-02-13T16:06:33.306970172Z" level=info msg="StartContainer for \"be0b346d1f0864e47fd7e7e2181a72d4363098c15be7f264476132eb3804e4e3\"" Feb 13 16:06:33.403597 systemd[1]: run-netns-cni\x2ded05ab96\x2d95a3\x2d44a0\x2debba\x2dbababc965c2a.mount: Deactivated successfully. Feb 13 16:06:33.488526 systemd-networkd[1607]: cali5c2fc1b2ed4: Link UP Feb 13 16:06:33.505721 systemd-networkd[1607]: cali5c2fc1b2ed4: Gained carrier Feb 13 16:06:33.512134 systemd-networkd[1607]: cali09afa656bb7: Gained IPv6LL Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:32.511 [INFO][5232] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0 csi-node-driver- calico-system a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef 828 0 2025-02-13 16:06:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-49 csi-node-driver-hd4qw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c2fc1b2ed4 [] []}} ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:32.512 [INFO][5232] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.096 [INFO][5358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" HandleID="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.192 [INFO][5358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" 
HandleID="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000279a50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-49", "pod":"csi-node-driver-hd4qw", "timestamp":"2025-02-13 16:06:33.096353107 +0000 UTC"}, Hostname:"ip-172-31-19-49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.193 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.193 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.193 [INFO][5358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-49' Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.220 [INFO][5358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.290 [INFO][5358] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.326 [INFO][5358] ipam/ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.334 [INFO][5358] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.343 [INFO][5358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.344 [INFO][5358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.348 [INFO][5358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.363 [INFO][5358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.396 [INFO][5358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.132/26] block=192.168.65.128/26 handle="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.397 [INFO][5358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.132/26] handle="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" host="ip-172-31-19-49" Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.397 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:06:33.563848 containerd[2047]: 2025-02-13 16:06:33.397 [INFO][5358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.132/26] IPv6=[] ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" HandleID="k8s-pod-network.461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.571760 containerd[2047]: 2025-02-13 16:06:33.434 [INFO][5232] cni-plugin/k8s.go 386: Populated endpoint ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"", Pod:"csi-node-driver-hd4qw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c2fc1b2ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:33.571760 containerd[2047]: 2025-02-13 16:06:33.440 [INFO][5232] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.132/32] ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.571760 containerd[2047]: 2025-02-13 16:06:33.440 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c2fc1b2ed4 ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.571760 containerd[2047]: 2025-02-13 16:06:33.489 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.571760 containerd[2047]: 2025-02-13 16:06:33.490 [INFO][5232] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" 
Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e", Pod:"csi-node-driver-hd4qw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c2fc1b2ed4", MAC:"e6:34:e1:18:d7:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:33.571760 containerd[2047]: 2025-02-13 16:06:33.527 [INFO][5232] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e" Namespace="calico-system" Pod="csi-node-driver-hd4qw" WorkloadEndpoint="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:33.678011 containerd[2047]: time="2025-02-13T16:06:33.676249413Z" level=info msg="StartContainer for \"be0b346d1f0864e47fd7e7e2181a72d4363098c15be7f264476132eb3804e4e3\" returns successfully" Feb 13 16:06:33.766152 containerd[2047]: time="2025-02-13T16:06:33.765172942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:33.766152 containerd[2047]: time="2025-02-13T16:06:33.765300682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:33.766152 containerd[2047]: time="2025-02-13T16:06:33.765327970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:33.766152 containerd[2047]: time="2025-02-13T16:06:33.765547786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:33.826322 systemd-networkd[1607]: cali74f379ee44a: Link UP Feb 13 16:06:33.831952 systemd-networkd[1607]: cali74f379ee44a: Gained carrier Feb 13 16:06:33.894543 systemd-networkd[1607]: calib10159def70: Gained IPv6LL Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.153 [INFO][5351] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0 calico-apiserver-6b77f6fc57- calico-apiserver 715df74d-104a-4b7c-8355-d8e0a4d0f71b 831 0 2025-02-13 16:05:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b77f6fc57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-49 calico-apiserver-6b77f6fc57-mmd5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74f379ee44a [] []}} ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.153 [INFO][5351] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.576 [INFO][5407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" HandleID="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.627 [INFO][5407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" HandleID="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002612f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-49", "pod":"calico-apiserver-6b77f6fc57-mmd5c", "timestamp":"2025-02-13 16:06:33.576679881 +0000 UTC"}, Hostname:"ip-172-31-19-49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.628 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.628 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
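"Gained IPv6LL" means the interface acquired its IPv6 link-local address. Assuming the default EUI-64 address-generation mode, that address is derived from the interface MAC by flipping the universal/local bit and splicing ff:fe into the middle, as this sketch shows for calib10159def70's MAC taken from the endpoint dump above:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocal derives the EUI-64 IPv6 link-local address from a MAC:
// fe80::/64 prefix, first MAC byte XOR 0x02, ff:fe in the middle.
func linkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, _ := net.ParseMAC("56:c9:27:4f:06:c6") // calib10159def70's MAC from the endpoint dump
	fmt.Println(linkLocal(mac))                 // fe80::54c9:27ff:fe4f:6c6
}
```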
Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.629 [INFO][5407] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-49' Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.647 [INFO][5407] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.662 [INFO][5407] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.679 [INFO][5407] ipam/ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.689 [INFO][5407] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.710 [INFO][5407] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.711 [INFO][5407] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.722 [INFO][5407] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.736 [INFO][5407] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.763 [INFO][5407] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.133/26] block=192.168.65.128/26 handle="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.763 [INFO][5407] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.133/26] handle="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" host="ip-172-31-19-49" Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.763 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
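The ipam/ipam.go records above trace one complete Calico assignment: acquire the host-wide lock, look up the host's block affinities, confirm the 192.168.65.128/26 block, claim 192.168.65.133/26 under the CNI handle, release the lock. The assignArgs dump earlier in the same burst shows the request the CNI plugin builds. A minimal Go sketch of issuing that request through libcalico-go's v3 client follows; the import paths and the AutoAssign return shape vary across Calico releases, and the client construction is an assumption, not something taken from these logs:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
    	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
    	// Assumption: client config comes from the environment; the real CNI
    	// plugin builds its client from the CNI network config instead.
    	c, err := client.NewFromEnv()
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Mirrors the AutoAssignArgs logged by ipam/ipam_plugin.go 265 above:
    	// one IPv4 address, no IPv6, keyed by the per-container handle ID.
    	handle := "k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e"
    	args := ipam.AutoAssignArgs{
    		Num4:     1,
    		Num6:     0,
    		HandleID: &handle,
    		Hostname: "ip-172-31-19-49",
    		Attrs: map[string]string{
    			"namespace": "calico-apiserver",
    			"node":      "ip-172-31-19-49",
    			"pod":       "calico-apiserver-6b77f6fc57-mmd5c",
    		},
    	}

    	// Recent releases return (*IPAMAssignments, *IPAMAssignments, error);
    	// the log's outcome for this request was [192.168.65.133/26].
    	v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(v4)
    }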
Feb 13 16:06:33.906504 containerd[2047]: 2025-02-13 16:06:33.763 [INFO][5407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.133/26] IPv6=[] ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" HandleID="k8s-pod-network.92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:33.928778 containerd[2047]: 2025-02-13 16:06:33.773 [INFO][5351] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"715df74d-104a-4b7c-8355-d8e0a4d0f71b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"", Pod:"calico-apiserver-6b77f6fc57-mmd5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74f379ee44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:33.928778 containerd[2047]: 2025-02-13 16:06:33.774 [INFO][5351] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.133/32] ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:33.928778 containerd[2047]: 2025-02-13 16:06:33.782 [INFO][5351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74f379ee44a ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:33.928778 containerd[2047]: 2025-02-13 16:06:33.834 [INFO][5351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:33.928778 containerd[2047]: 2025-02-13 16:06:33.847 [INFO][5351] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"715df74d-104a-4b7c-8355-d8e0a4d0f71b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e", Pod:"calico-apiserver-6b77f6fc57-mmd5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74f379ee44a", MAC:"c2:34:80:dd:17:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:33.928778 containerd[2047]: 2025-02-13 16:06:33.888 [INFO][5351] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-mmd5c" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:34.031855 containerd[2047]: time="2025-02-13T16:06:34.031760779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hd4qw,Uid:a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef,Namespace:calico-system,Attempt:1,} returns sandbox id \"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e\"" Feb 13 16:06:34.056795 containerd[2047]: time="2025-02-13T16:06:34.055571623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:34.056795 containerd[2047]: time="2025-02-13T16:06:34.055907587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:34.056795 containerd[2047]: time="2025-02-13T16:06:34.055986415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:34.056795 containerd[2047]: time="2025-02-13T16:06:34.056399359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:34.128180 systemd-networkd[1607]: cali2dadb94e0e3: Link UP Feb 13 16:06:34.134614 systemd-networkd[1607]: cali2dadb94e0e3: Gained carrier Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:33.721 [INFO][5412] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0 calico-apiserver-6b77f6fc57- calico-apiserver 7fdafe03-6c5b-493a-8ab3-33c001bd2fdc 853 0 2025-02-13 16:05:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b77f6fc57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-49 calico-apiserver-6b77f6fc57-kzwvn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2dadb94e0e3 [] []}} ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:33.721 [INFO][5412] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.004 [INFO][5493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" HandleID="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.027 [INFO][5493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" HandleID="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037f220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-49", "pod":"calico-apiserver-6b77f6fc57-kzwvn", "timestamp":"2025-02-13 16:06:34.004818487 +0000 UTC"}, Hostname:"ip-172-31-19-49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.027 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.027 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.027 [INFO][5493] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-49' Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.033 [INFO][5493] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.041 [INFO][5493] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.050 [INFO][5493] ipam/ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.054 [INFO][5493] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.059 [INFO][5493] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.059 [INFO][5493] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.062 [INFO][5493] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.073 [INFO][5493] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.089 [INFO][5493] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.134/26] block=192.168.65.128/26 handle="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.089 [INFO][5493] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.134/26] handle="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" host="ip-172-31-19-49" Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.089 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
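Both assignments in this section ("Trying affinity for 192.168.65.128/26 ... Affinity is confirmed and block has been loaded") come out of the same /26 affinity block held by this host, which is why the addresses land consecutively: .132 for the CSI driver pod, .133 and .134 for the two apiserver pods. A quick self-contained Go check of that block arithmetic:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The affinity block confirmed for host ip-172-31-19-49 in the records above.
    	block := netip.MustParsePrefix("192.168.65.128/26")
    	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits())) // 64

    	// The three addresses assigned in this section all fall inside it.
    	for _, s := range []string{"192.168.65.132", "192.168.65.133", "192.168.65.134"} {
    		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
    	}
    }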
Feb 13 16:06:34.191979 containerd[2047]: 2025-02-13 16:06:34.090 [INFO][5493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.134/26] IPv6=[] ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" HandleID="k8s-pod-network.07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.193171 containerd[2047]: 2025-02-13 16:06:34.094 [INFO][5412] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"", Pod:"calico-apiserver-6b77f6fc57-kzwvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dadb94e0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:34.193171 containerd[2047]: 2025-02-13 16:06:34.095 [INFO][5412] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.134/32] ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.193171 containerd[2047]: 2025-02-13 16:06:34.095 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2dadb94e0e3 ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.193171 containerd[2047]: 2025-02-13 16:06:34.138 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.193171 containerd[2047]: 2025-02-13 16:06:34.140 [INFO][5412] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e", Pod:"calico-apiserver-6b77f6fc57-kzwvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dadb94e0e3", MAC:"3a:38:7a:14:b7:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:34.193171 containerd[2047]: 2025-02-13 16:06:34.179 [INFO][5412] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e" Namespace="calico-apiserver" Pod="calico-apiserver-6b77f6fc57-kzwvn" WorkloadEndpoint="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:34.273455 containerd[2047]: time="2025-02-13T16:06:34.271748852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-mmd5c,Uid:715df74d-104a-4b7c-8355-d8e0a4d0f71b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e\"" Feb 13 16:06:34.279459 kubelet[3561]: I0213 16:06:34.277172 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2v6bn" podStartSLOduration=45.27710858 podStartE2EDuration="45.27710858s" podCreationTimestamp="2025-02-13 16:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:34.236566724 +0000 UTC m=+57.038178920" watchObservedRunningTime="2025-02-13 16:06:34.27710858 +0000 UTC m=+57.078720752" Feb 13 16:06:34.302057 systemd[1]: Started sshd@9-172.31.19.49:22-139.178.68.195:38276.service - OpenSSH per-connection server daemon (139.178.68.195:38276). Feb 13 16:06:34.373874 containerd[2047]: time="2025-02-13T16:06:34.373536321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:34.373874 containerd[2047]: time="2025-02-13T16:06:34.373661661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:34.373874 containerd[2047]: time="2025-02-13T16:06:34.373699389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:34.375009 containerd[2047]: time="2025-02-13T16:06:34.374905377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:34.519156 containerd[2047]: time="2025-02-13T16:06:34.512081746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77f6fc57-kzwvn,Uid:7fdafe03-6c5b-493a-8ab3-33c001bd2fdc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e\"" Feb 13 16:06:34.532193 sshd[5596]: Accepted publickey for core from 139.178.68.195 port 38276 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:34.533617 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:34.543487 systemd-logind[2020]: New session 10 of user core. Feb 13 16:06:34.551193 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:06:34.869692 sshd[5596]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:34.881017 systemd[1]: sshd@9-172.31.19.49:22-139.178.68.195:38276.service: Deactivated successfully. Feb 13 16:06:34.881890 systemd-logind[2020]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:06:34.893125 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:06:34.909035 systemd[1]: Started sshd@10-172.31.19.49:22-139.178.68.195:38282.service - OpenSSH per-connection server daemon (139.178.68.195:38282). Feb 13 16:06:34.911687 systemd-logind[2020]: Removed session 10. Feb 13 16:06:35.113540 sshd[5653]: Accepted publickey for core from 139.178.68.195 port 38282 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:35.119407 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:35.157827 systemd-logind[2020]: New session 11 of user core. Feb 13 16:06:35.167073 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:06:35.238509 systemd-networkd[1607]: cali74f379ee44a: Gained IPv6LL Feb 13 16:06:35.303605 systemd-networkd[1607]: cali2dadb94e0e3: Gained IPv6LL Feb 13 16:06:35.365783 systemd-networkd[1607]: cali5c2fc1b2ed4: Gained IPv6LL Feb 13 16:06:35.792653 sshd[5653]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:35.812168 systemd[1]: sshd@10-172.31.19.49:22-139.178.68.195:38282.service: Deactivated successfully. Feb 13 16:06:35.832504 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:06:35.854110 systemd-logind[2020]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:06:35.869225 systemd[1]: Started sshd@11-172.31.19.49:22-139.178.68.195:38294.service - OpenSSH per-connection server daemon (139.178.68.195:38294). Feb 13 16:06:35.882690 systemd-logind[2020]: Removed session 11. 
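Each "Accepted publickey for core ... RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU" line above identifies the client key by its OpenSSH SHA-256 fingerprint: the unpadded base64 of the SHA-256 digest of the wire-format public key. The key itself never appears in the journal, only its fingerprint, so the sketch below derives a fingerprint from a freshly generated throwaway key (ed25519 for brevity, although the logged key is RSA), assuming golang.org/x/crypto/ssh:

    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"fmt"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Throwaway key: the actual public key for "core" is not in the log.
    	pub, _, err := ed25519.GenerateKey(rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sshPub, err := ssh.NewPublicKey(pub)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints the same "SHA256:..." form sshd logs on accept.
    	fmt.Println(ssh.FingerprintSHA256(sshPub))
    }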
Feb 13 16:06:36.113786 sshd[5669]: Accepted publickey for core from 139.178.68.195 port 38294 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:36.119387 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:36.139030 systemd-logind[2020]: New session 12 of user core. Feb 13 16:06:36.145289 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:06:36.480792 sshd[5669]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:36.492211 systemd[1]: sshd@11-172.31.19.49:22-139.178.68.195:38294.service: Deactivated successfully. Feb 13 16:06:36.509518 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:06:36.509796 systemd-logind[2020]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:06:36.513644 systemd-logind[2020]: Removed session 12. Feb 13 16:06:36.599362 containerd[2047]: time="2025-02-13T16:06:36.599262792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:36.601555 containerd[2047]: time="2025-02-13T16:06:36.601480860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 16:06:36.603495 containerd[2047]: time="2025-02-13T16:06:36.603391620Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:36.609816 containerd[2047]: time="2025-02-13T16:06:36.609759504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:36.611599 containerd[2047]: time="2025-02-13T16:06:36.611326308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.554419866s" Feb 13 16:06:36.611599 containerd[2047]: time="2025-02-13T16:06:36.611389596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 16:06:36.613702 containerd[2047]: time="2025-02-13T16:06:36.612087708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 16:06:36.644497 containerd[2047]: time="2025-02-13T16:06:36.642949572Z" level=info msg="CreateContainer within sandbox \"d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 16:06:36.674829 containerd[2047]: time="2025-02-13T16:06:36.674134968Z" level=info msg="CreateContainer within sandbox \"d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"332c58b2694252ff447ff4929edbfc0f89bcc05ef5a2cb08b6a2c994ba24464c\"" Feb 13 16:06:36.678637 containerd[2047]: time="2025-02-13T16:06:36.678579336Z" level=info msg="StartContainer for \"332c58b2694252ff447ff4929edbfc0f89bcc05ef5a2cb08b6a2c994ba24464c\"" Feb 13 
16:06:36.856010 containerd[2047]: time="2025-02-13T16:06:36.855875701Z" level=info msg="StartContainer for \"332c58b2694252ff447ff4929edbfc0f89bcc05ef5a2cb08b6a2c994ba24464c\" returns successfully" Feb 13 16:06:37.344909 kubelet[3561]: I0213 16:06:37.344527 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57f4fd4464-94h72" podStartSLOduration=33.781338142 podStartE2EDuration="37.34446318s" podCreationTimestamp="2025-02-13 16:06:00 +0000 UTC" firstStartedPulling="2025-02-13 16:06:33.04873011 +0000 UTC m=+55.850342282" lastFinishedPulling="2025-02-13 16:06:36.61185516 +0000 UTC m=+59.413467320" observedRunningTime="2025-02-13 16:06:37.269944403 +0000 UTC m=+60.071556695" watchObservedRunningTime="2025-02-13 16:06:37.34446318 +0000 UTC m=+60.146075364" Feb 13 16:06:37.494357 containerd[2047]: time="2025-02-13T16:06:37.494243448Z" level=info msg="StopPodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\"" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.584 [WARNING][5757] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f417de96-d005-4690-babe-3dd9712f90ee", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406", Pod:"coredns-76f75df574-2v6bn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09afa656bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.585 [INFO][5757] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.585 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" iface="eth0" netns="" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.585 [INFO][5757] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.585 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.641 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.641 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.641 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.658 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.658 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.661 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.668722 containerd[2047]: 2025-02-13 16:06:37.664 [INFO][5757] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.671522 containerd[2047]: time="2025-02-13T16:06:37.668649001Z" level=info msg="TearDown network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" successfully" Feb 13 16:06:37.671522 containerd[2047]: time="2025-02-13T16:06:37.669021697Z" level=info msg="StopPodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" returns successfully" Feb 13 16:06:37.671522 containerd[2047]: time="2025-02-13T16:06:37.670594969Z" level=info msg="RemovePodSandbox for \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\"" Feb 13 16:06:37.671522 containerd[2047]: time="2025-02-13T16:06:37.670680865Z" level=info msg="Forcibly stopping sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\"" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.752 [WARNING][5784] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f417de96-d005-4690-babe-3dd9712f90ee", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"d8c9286835bc7a8ba32d88eece0f6d92f05a0450caa5c9426313c897ba932406", Pod:"coredns-76f75df574-2v6bn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09afa656bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.758 [INFO][5784] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.758 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" iface="eth0" netns="" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.758 [INFO][5784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.758 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.809 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.810 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.810 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.823 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.823 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" HandleID="k8s-pod-network.678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--2v6bn-eth0" Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.826 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.832243 containerd[2047]: 2025-02-13 16:06:37.829 [INFO][5784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da" Feb 13 16:06:37.834129 containerd[2047]: time="2025-02-13T16:06:37.832302926Z" level=info msg="TearDown network for sandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" successfully" Feb 13 16:06:37.840113 containerd[2047]: time="2025-02-13T16:06:37.840051734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:37.840252 containerd[2047]: time="2025-02-13T16:06:37.840155702Z" level=info msg="RemovePodSandbox \"678ab46bad1f58d3e33f7eebce2d7a208be796760d648f00ffe12938597ed2da\" returns successfully" Feb 13 16:06:37.841502 containerd[2047]: time="2025-02-13T16:06:37.841283174Z" level=info msg="StopPodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\"" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.923 [WARNING][5810] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"715df74d-104a-4b7c-8355-d8e0a4d0f71b", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e", Pod:"calico-apiserver-6b77f6fc57-mmd5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74f379ee44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.923 [INFO][5810] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.923 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" iface="eth0" netns="" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.923 [INFO][5810] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.923 [INFO][5810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.972 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.972 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.972 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.986 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.986 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.989 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:37.995694 containerd[2047]: 2025-02-13 16:06:37.991 [INFO][5810] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:37.995694 containerd[2047]: time="2025-02-13T16:06:37.995494095Z" level=info msg="TearDown network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" successfully" Feb 13 16:06:37.995694 containerd[2047]: time="2025-02-13T16:06:37.995568099Z" level=info msg="StopPodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" returns successfully" Feb 13 16:06:38.001106 containerd[2047]: time="2025-02-13T16:06:37.997451019Z" level=info msg="RemovePodSandbox for \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\"" Feb 13 16:06:38.001106 containerd[2047]: time="2025-02-13T16:06:37.997502739Z" level=info msg="Forcibly stopping sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\"" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.081 [WARNING][5834] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"715df74d-104a-4b7c-8355-d8e0a4d0f71b", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e", Pod:"calico-apiserver-6b77f6fc57-mmd5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74f379ee44a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.082 [INFO][5834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.082 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" iface="eth0" netns="" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.082 [INFO][5834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.082 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.155 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.155 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.156 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.171 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.171 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" HandleID="k8s-pod-network.8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--mmd5c-eth0" Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.173 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:38.180213 containerd[2047]: 2025-02-13 16:06:38.176 [INFO][5834] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c" Feb 13 16:06:38.180213 containerd[2047]: time="2025-02-13T16:06:38.181334904Z" level=info msg="TearDown network for sandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" successfully" Feb 13 16:06:38.199516 containerd[2047]: time="2025-02-13T16:06:38.199118688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:38.199516 containerd[2047]: time="2025-02-13T16:06:38.199220844Z" level=info msg="RemovePodSandbox \"8f60001f41caea8b15f1e82dafdcd9e7d666be400b4d459e98cbff490e0eb01c\" returns successfully" Feb 13 16:06:38.205186 containerd[2047]: time="2025-02-13T16:06:38.203357112Z" level=info msg="StopPodSandbox for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\"" Feb 13 16:06:38.215856 ntpd[2002]: Listen normally on 6 vxlan.calico 192.168.65.128:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 6 vxlan.calico 192.168.65.128:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 7 vxlan.calico [fe80::6432:b0ff:feba:50f2%4]:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 8 calic2cf6ae5527 [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 9 calib10159def70 [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 10 cali09afa656bb7 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 11 cali5c2fc1b2ed4 [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 16:06:38.216839 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 12 cali74f379ee44a [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 16:06:38.215993 ntpd[2002]: Listen normally on 7 vxlan.calico [fe80::6432:b0ff:feba:50f2%4]:123 Feb 13 16:06:38.217246 ntpd[2002]: 13 Feb 16:06:38 ntpd[2002]: Listen normally on 13 cali2dadb94e0e3 [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 16:06:38.216077 ntpd[2002]: Listen normally on 8 calic2cf6ae5527 [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 16:06:38.216146 ntpd[2002]: Listen normally on 9 calib10159def70 [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 16:06:38.216214 ntpd[2002]: Listen normally on 10 cali09afa656bb7 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 16:06:38.216281 ntpd[2002]: Listen normally on 11 
cali5c2fc1b2ed4 [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 16:06:38.216371 ntpd[2002]: Listen normally on 12 cali74f379ee44a [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 16:06:38.217087 ntpd[2002]: Listen normally on 13 cali2dadb94e0e3 [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.396 [WARNING][5863] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e", Pod:"calico-apiserver-6b77f6fc57-kzwvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dadb94e0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.397 [INFO][5863] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.397 [INFO][5863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" iface="eth0" netns="" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.397 [INFO][5863] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.397 [INFO][5863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.496 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.496 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.496 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.518 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.520 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.528 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:38.543211 containerd[2047]: 2025-02-13 16:06:38.535 [INFO][5863] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.544992 containerd[2047]: time="2025-02-13T16:06:38.543393818Z" level=info msg="TearDown network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" successfully" Feb 13 16:06:38.544992 containerd[2047]: time="2025-02-13T16:06:38.544013462Z" level=info msg="StopPodSandbox for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" returns successfully" Feb 13 16:06:38.546258 containerd[2047]: time="2025-02-13T16:06:38.545967062Z" level=info msg="RemovePodSandbox for \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\"" Feb 13 16:06:38.546258 containerd[2047]: time="2025-02-13T16:06:38.546073466Z" level=info msg="Forcibly stopping sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\"" Feb 13 16:06:38.633589 containerd[2047]: time="2025-02-13T16:06:38.633034442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:38.634653 containerd[2047]: time="2025-02-13T16:06:38.634595678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 16:06:38.641562 containerd[2047]: time="2025-02-13T16:06:38.641468078Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:38.647941 containerd[2047]: time="2025-02-13T16:06:38.647879414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:38.654613 containerd[2047]: time="2025-02-13T16:06:38.649675634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.037531226s" Feb 13 16:06:38.654613 containerd[2047]: time="2025-02-13T16:06:38.649743542Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 16:06:38.655923 containerd[2047]: time="2025-02-13T16:06:38.655624094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 16:06:38.660955 containerd[2047]: time="2025-02-13T16:06:38.660844694Z" level=info msg="CreateContainer within sandbox \"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 16:06:38.702465 containerd[2047]: time="2025-02-13T16:06:38.702183914Z" level=info msg="CreateContainer within sandbox \"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2881d817af8c45b6aac0c6388f8729ff26b3e72727ae688d437f22d27c64aabb\"" Feb 13 16:06:38.706851 containerd[2047]: time="2025-02-13T16:06:38.705511070Z" level=info msg="StartContainer for \"2881d817af8c45b6aac0c6388f8729ff26b3e72727ae688d437f22d27c64aabb\"" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.679 [WARNING][5889] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0", GenerateName:"calico-apiserver-6b77f6fc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fdafe03-6c5b-493a-8ab3-33c001bd2fdc", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77f6fc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e", Pod:"calico-apiserver-6b77f6fc57-kzwvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2dadb94e0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.679 [INFO][5889] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.679 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" iface="eth0" netns="" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.679 [INFO][5889] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.679 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.762 [INFO][5897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.762 [INFO][5897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.763 [INFO][5897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.782 [WARNING][5897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.782 [INFO][5897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" HandleID="k8s-pod-network.199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Workload="ip--172--31--19--49-k8s-calico--apiserver--6b77f6fc57--kzwvn-eth0" Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.794 [INFO][5897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:38.811698 containerd[2047]: 2025-02-13 16:06:38.806 [INFO][5889] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905" Feb 13 16:06:38.813723 containerd[2047]: time="2025-02-13T16:06:38.811703931Z" level=info msg="TearDown network for sandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" successfully" Feb 13 16:06:38.820511 containerd[2047]: time="2025-02-13T16:06:38.820409331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:06:38.820663 containerd[2047]: time="2025-02-13T16:06:38.820582887Z" level=info msg="RemovePodSandbox \"199bc26cea20ae051c25e8a34ec93c5b3edeecb211e874632aee29a067164905\" returns successfully" Feb 13 16:06:38.821461 containerd[2047]: time="2025-02-13T16:06:38.821379327Z" level=info msg="StopPodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\"" Feb 13 16:06:38.914561 containerd[2047]: time="2025-02-13T16:06:38.914316555Z" level=info msg="StartContainer for \"2881d817af8c45b6aac0c6388f8729ff26b3e72727ae688d437f22d27c64aabb\" returns successfully" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.936 [WARNING][5939] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0", GenerateName:"calico-kube-controllers-57f4fd4464-", Namespace:"calico-system", SelfLink:"", UID:"f8b312d7-5730-4769-8e43-9048d8afafd5", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f4fd4464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383", Pod:"calico-kube-controllers-57f4fd4464-94h72", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib10159def70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.937 [INFO][5939] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.937 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" iface="eth0" netns="" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.937 [INFO][5939] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.937 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.982 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.982 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:38.982 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:39.001 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:39.001 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:39.003 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:39.010234 containerd[2047]: 2025-02-13 16:06:39.007 [INFO][5939] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.010234 containerd[2047]: time="2025-02-13T16:06:39.010120632Z" level=info msg="TearDown network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" successfully" Feb 13 16:06:39.010234 containerd[2047]: time="2025-02-13T16:06:39.010163556Z" level=info msg="StopPodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" returns successfully" Feb 13 16:06:39.012763 containerd[2047]: time="2025-02-13T16:06:39.011743368Z" level=info msg="RemovePodSandbox for \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\"" Feb 13 16:06:39.012763 containerd[2047]: time="2025-02-13T16:06:39.012286620Z" level=info msg="Forcibly stopping sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\"" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.119 [WARNING][5973] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0", GenerateName:"calico-kube-controllers-57f4fd4464-", Namespace:"calico-system", SelfLink:"", UID:"f8b312d7-5730-4769-8e43-9048d8afafd5", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f4fd4464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"d5a489727b5f9c6c5ba571d9d0011d6186218f2a030ed1f2a0bb06b775da0383", Pod:"calico-kube-controllers-57f4fd4464-94h72", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib10159def70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.120 [INFO][5973] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.121 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" iface="eth0" netns="" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.121 [INFO][5973] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.123 [INFO][5973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.209 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.209 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.209 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.229 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.229 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" HandleID="k8s-pod-network.d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Workload="ip--172--31--19--49-k8s-calico--kube--controllers--57f4fd4464--94h72-eth0" Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.232 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:39.247669 containerd[2047]: 2025-02-13 16:06:39.240 [INFO][5973] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa" Feb 13 16:06:39.247669 containerd[2047]: time="2025-02-13T16:06:39.247606129Z" level=info msg="TearDown network for sandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" successfully" Feb 13 16:06:39.257000 containerd[2047]: time="2025-02-13T16:06:39.256352425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:39.257000 containerd[2047]: time="2025-02-13T16:06:39.256484449Z" level=info msg="RemovePodSandbox \"d3d55815b49a20889ba29447e18e5dc0a6354d8859eec237f1f917c03aaf73aa\" returns successfully" Feb 13 16:06:39.258575 containerd[2047]: time="2025-02-13T16:06:39.258036505Z" level=info msg="StopPodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\"" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.372 [WARNING][6019] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e", Pod:"csi-node-driver-hd4qw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c2fc1b2ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.373 [INFO][6019] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.373 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" iface="eth0" netns="" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.373 [INFO][6019] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.373 [INFO][6019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.413 [INFO][6027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.413 [INFO][6027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.413 [INFO][6027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.426 [WARNING][6027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.427 [INFO][6027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.432 [INFO][6027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:39.440210 containerd[2047]: 2025-02-13 16:06:39.436 [INFO][6019] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.441950 containerd[2047]: time="2025-02-13T16:06:39.440669594Z" level=info msg="TearDown network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" successfully" Feb 13 16:06:39.441950 containerd[2047]: time="2025-02-13T16:06:39.441606530Z" level=info msg="StopPodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" returns successfully" Feb 13 16:06:39.443145 containerd[2047]: time="2025-02-13T16:06:39.442516934Z" level=info msg="RemovePodSandbox for \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\"" Feb 13 16:06:39.443145 containerd[2047]: time="2025-02-13T16:06:39.442581770Z" level=info msg="Forcibly stopping sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\"" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.532 [WARNING][6045] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8ff4d25-2882-4f7f-8fc6-343fb3ae7aef", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 6, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e", Pod:"csi-node-driver-hd4qw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c2fc1b2ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.533 [INFO][6045] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.533 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" iface="eth0" netns="" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.533 [INFO][6045] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.533 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.588 [INFO][6051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.589 [INFO][6051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.589 [INFO][6051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.603 [WARNING][6051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.603 [INFO][6051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" HandleID="k8s-pod-network.182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Workload="ip--172--31--19--49-k8s-csi--node--driver--hd4qw-eth0" Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.606 [INFO][6051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:39.613211 containerd[2047]: 2025-02-13 16:06:39.609 [INFO][6045] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560" Feb 13 16:06:39.614483 containerd[2047]: time="2025-02-13T16:06:39.613442295Z" level=info msg="TearDown network for sandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" successfully" Feb 13 16:06:39.620840 containerd[2047]: time="2025-02-13T16:06:39.620714823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:39.621153 containerd[2047]: time="2025-02-13T16:06:39.620852319Z" level=info msg="RemovePodSandbox \"182c22c5dd9099514b5e5ed64f514dbce33249049f93b684cac64f3cf80cc560\" returns successfully" Feb 13 16:06:39.622263 containerd[2047]: time="2025-02-13T16:06:39.621800907Z" level=info msg="StopPodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\"" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.716 [WARNING][6069] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295", Pod:"coredns-76f75df574-vhcmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2cf6ae5527", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.717 [INFO][6069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.717 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" iface="eth0" netns="" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.717 [INFO][6069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.717 [INFO][6069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.760 [INFO][6076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.760 [INFO][6076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.760 [INFO][6076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.777 [WARNING][6076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.777 [INFO][6076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.780 [INFO][6076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:39.787766 containerd[2047]: 2025-02-13 16:06:39.783 [INFO][6069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.790264 containerd[2047]: time="2025-02-13T16:06:39.787810804Z" level=info msg="TearDown network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" successfully" Feb 13 16:06:39.790264 containerd[2047]: time="2025-02-13T16:06:39.787851736Z" level=info msg="StopPodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" returns successfully" Feb 13 16:06:39.790264 containerd[2047]: time="2025-02-13T16:06:39.789897040Z" level=info msg="RemovePodSandbox for \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\"" Feb 13 16:06:39.790264 containerd[2047]: time="2025-02-13T16:06:39.790131088Z" level=info msg="Forcibly stopping sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\"" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.881 [WARNING][6095] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c7a2c5f9-1a8b-4511-a4cd-26dfb3ccbe42", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 5, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-49", ContainerID:"2e5bb74cf4363ca91270f71c0b06c32f8632455eec2f4299b0cf940c5a538295", Pod:"coredns-76f75df574-vhcmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2cf6ae5527", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.882 [INFO][6095] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.882 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" iface="eth0" netns="" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.882 [INFO][6095] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.882 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.923 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.923 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.923 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.940 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.940 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" HandleID="k8s-pod-network.a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Workload="ip--172--31--19--49-k8s-coredns--76f75df574--vhcmn-eth0" Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.943 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:06:39.951406 containerd[2047]: 2025-02-13 16:06:39.948 [INFO][6095] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8" Feb 13 16:06:39.951406 containerd[2047]: time="2025-02-13T16:06:39.951346541Z" level=info msg="TearDown network for sandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" successfully" Feb 13 16:06:39.960760 containerd[2047]: time="2025-02-13T16:06:39.960474737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:06:39.960760 containerd[2047]: time="2025-02-13T16:06:39.960591485Z" level=info msg="RemovePodSandbox \"a6b207dfc8594cc9dc6de37e40b81c9d025fd1829c33b31f93cfe487dff8d1e8\" returns successfully" Feb 13 16:06:41.530664 systemd[1]: Started sshd@12-172.31.19.49:22-139.178.68.195:47358.service - OpenSSH per-connection server daemon (139.178.68.195:47358). Feb 13 16:06:41.755286 sshd[6114]: Accepted publickey for core from 139.178.68.195 port 47358 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:41.759897 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:41.772004 systemd-logind[2020]: New session 13 of user core. Feb 13 16:06:41.781554 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 16:06:41.785636 containerd[2047]: time="2025-02-13T16:06:41.784571070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:41.791275 containerd[2047]: time="2025-02-13T16:06:41.791202930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 16:06:41.794285 containerd[2047]: time="2025-02-13T16:06:41.794205702Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:41.803018 containerd[2047]: time="2025-02-13T16:06:41.802927974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:41.806735 containerd[2047]: time="2025-02-13T16:06:41.806642718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.150944716s" Feb 13 16:06:41.806735 containerd[2047]: time="2025-02-13T16:06:41.806730990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 16:06:41.810339 containerd[2047]: time="2025-02-13T16:06:41.809684610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 16:06:41.811569 containerd[2047]: time="2025-02-13T16:06:41.811462830Z" level=info msg="CreateContainer within sandbox \"92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 16:06:41.857092 containerd[2047]: time="2025-02-13T16:06:41.857016942Z" level=info msg="CreateContainer within sandbox \"92335ddc25e17aa3135ad4bd10e8342f4aff53220d846495fb68bebbf696378e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b356d89afd7a990b52c9744293f9c3bc05a1839b1e59d86d5ed2b4ec15a7b4e\"" Feb 13 16:06:41.862750 containerd[2047]: time="2025-02-13T16:06:41.862686030Z" level=info msg="StartContainer for \"5b356d89afd7a990b52c9744293f9c3bc05a1839b1e59d86d5ed2b4ec15a7b4e\"" Feb 13 16:06:42.146562 containerd[2047]: time="2025-02-13T16:06:42.145044363Z" level=info msg="StartContainer for \"5b356d89afd7a990b52c9744293f9c3bc05a1839b1e59d86d5ed2b4ec15a7b4e\" returns successfully" Feb 13 16:06:42.282886 sshd[6114]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:42.288051 containerd[2047]: time="2025-02-13T16:06:42.287860588Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:42.291240 containerd[2047]: time="2025-02-13T16:06:42.291058984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 16:06:42.295987 systemd-logind[2020]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:06:42.303572 systemd[1]: sshd@12-172.31.19.49:22-139.178.68.195:47358.service: Deactivated successfully. 
Feb 13 16:06:42.313615 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:06:42.318606 systemd-logind[2020]: Removed session 13. Feb 13 16:06:42.319756 containerd[2047]: time="2025-02-13T16:06:42.318771880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 508.762994ms" Feb 13 16:06:42.319756 containerd[2047]: time="2025-02-13T16:06:42.318860848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 16:06:42.324466 containerd[2047]: time="2025-02-13T16:06:42.323469436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 16:06:42.328170 containerd[2047]: time="2025-02-13T16:06:42.327934744Z" level=info msg="CreateContainer within sandbox \"07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 16:06:42.357498 containerd[2047]: time="2025-02-13T16:06:42.357409265Z" level=info msg="CreateContainer within sandbox \"07a1831ab94de1b825f05105de3561ca2640bf90153edf2a6833bc0b4621ba7e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"534ee8abbc07576723734a96f2485180e3ee716b19eea61d731e1c4d5d03b1e0\"" Feb 13 16:06:42.360725 containerd[2047]: time="2025-02-13T16:06:42.360437393Z" level=info msg="StartContainer for \"534ee8abbc07576723734a96f2485180e3ee716b19eea61d731e1c4d5d03b1e0\"" Feb 13 16:06:42.536849 containerd[2047]: time="2025-02-13T16:06:42.536741297Z" level=info msg="StartContainer for \"534ee8abbc07576723734a96f2485180e3ee716b19eea61d731e1c4d5d03b1e0\" returns successfully" Feb 13 16:06:43.349466 kubelet[3561]: I0213 16:06:43.349265 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:43.380483 kubelet[3561]: I0213 16:06:43.380374 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b77f6fc57-kzwvn" podStartSLOduration=37.581268336 podStartE2EDuration="45.380311554s" podCreationTimestamp="2025-02-13 16:05:58 +0000 UTC" firstStartedPulling="2025-02-13 16:06:34.52179721 +0000 UTC m=+57.323409382" lastFinishedPulling="2025-02-13 16:06:42.32084044 +0000 UTC m=+65.122452600" observedRunningTime="2025-02-13 16:06:43.379003974 +0000 UTC m=+66.180616158" watchObservedRunningTime="2025-02-13 16:06:43.380311554 +0000 UTC m=+66.181923750" Feb 13 16:06:43.382529 kubelet[3561]: I0213 16:06:43.382465 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b77f6fc57-mmd5c" podStartSLOduration=37.87022074 podStartE2EDuration="45.382344174s" podCreationTimestamp="2025-02-13 16:05:58 +0000 UTC" firstStartedPulling="2025-02-13 16:06:34.295049816 +0000 UTC m=+57.096661988" lastFinishedPulling="2025-02-13 16:06:41.80717325 +0000 UTC m=+64.608785422" observedRunningTime="2025-02-13 16:06:42.376487813 +0000 UTC m=+65.178100009" watchObservedRunningTime="2025-02-13 16:06:43.382344174 +0000 UTC m=+66.183956334" Feb 13 16:06:44.354918 kubelet[3561]: I0213 16:06:44.354860 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 
16:06:44.486548 containerd[2047]: time="2025-02-13T16:06:44.486449167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:44.492150 containerd[2047]: time="2025-02-13T16:06:44.492072211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 16:06:44.497401 containerd[2047]: time="2025-02-13T16:06:44.494572339Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:44.510487 containerd[2047]: time="2025-02-13T16:06:44.509567119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:44.512106 containerd[2047]: time="2025-02-13T16:06:44.510370159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.186832419s" Feb 13 16:06:44.512106 containerd[2047]: time="2025-02-13T16:06:44.511961863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 16:06:44.524870 containerd[2047]: time="2025-02-13T16:06:44.524596807Z" level=info msg="CreateContainer within sandbox \"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 16:06:44.606388 containerd[2047]: time="2025-02-13T16:06:44.604903760Z" level=info msg="CreateContainer within sandbox \"461159e7c3001105b8d2ef79e5b432d87cce1e6aa925aefaccf0dea5798eb90e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"07db7b3aabc269385d8578087afbdd2692933ebcf7fe7e7599a00596eb27377e\"" Feb 13 16:06:44.627294 containerd[2047]: time="2025-02-13T16:06:44.625215656Z" level=info msg="StartContainer for \"07db7b3aabc269385d8578087afbdd2692933ebcf7fe7e7599a00596eb27377e\"" Feb 13 16:06:44.625832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291205974.mount: Deactivated successfully. 
Feb 13 16:06:44.931995 containerd[2047]: time="2025-02-13T16:06:44.931599465Z" level=info msg="StartContainer for \"07db7b3aabc269385d8578087afbdd2692933ebcf7fe7e7599a00596eb27377e\" returns successfully" Feb 13 16:06:45.407498 kubelet[3561]: I0213 16:06:45.407093 3561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hd4qw" podStartSLOduration=34.929214672 podStartE2EDuration="45.406839716s" podCreationTimestamp="2025-02-13 16:06:00 +0000 UTC" firstStartedPulling="2025-02-13 16:06:34.036019903 +0000 UTC m=+56.837632087" lastFinishedPulling="2025-02-13 16:06:44.513644959 +0000 UTC m=+67.315257131" observedRunningTime="2025-02-13 16:06:45.403262396 +0000 UTC m=+68.204874592" watchObservedRunningTime="2025-02-13 16:06:45.406839716 +0000 UTC m=+68.208451900" Feb 13 16:06:45.774092 kubelet[3561]: I0213 16:06:45.774031 3561 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 16:06:45.774570 kubelet[3561]: I0213 16:06:45.774105 3561 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 16:06:47.314401 systemd[1]: Started sshd@13-172.31.19.49:22-139.178.68.195:59232.service - OpenSSH per-connection server daemon (139.178.68.195:59232). Feb 13 16:06:47.504338 sshd[6282]: Accepted publickey for core from 139.178.68.195 port 59232 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:47.507612 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:47.522460 systemd-logind[2020]: New session 14 of user core. Feb 13 16:06:47.529292 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 16:06:47.802242 sshd[6282]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:47.808901 systemd[1]: sshd@13-172.31.19.49:22-139.178.68.195:59232.service: Deactivated successfully. Feb 13 16:06:47.821099 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:06:47.829338 systemd-logind[2020]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:06:47.837681 systemd-logind[2020]: Removed session 14. Feb 13 16:06:52.836761 systemd[1]: Started sshd@14-172.31.19.49:22-139.178.68.195:59248.service - OpenSSH per-connection server daemon (139.178.68.195:59248). Feb 13 16:06:53.026143 sshd[6304]: Accepted publickey for core from 139.178.68.195 port 59248 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:53.031050 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:53.045252 systemd-logind[2020]: New session 15 of user core. Feb 13 16:06:53.056593 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:06:53.335842 sshd[6304]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:53.343378 systemd[1]: sshd@14-172.31.19.49:22-139.178.68.195:59248.service: Deactivated successfully. Feb 13 16:06:53.352806 systemd-logind[2020]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:06:53.354048 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:06:53.359038 systemd-logind[2020]: Removed session 15. 
Feb 13 16:06:53.384983 kubelet[3561]: I0213 16:06:53.383842 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:57.399477 kubelet[3561]: I0213 16:06:57.398493 3561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:58.373294 systemd[1]: Started sshd@15-172.31.19.49:22-139.178.68.195:40456.service - OpenSSH per-connection server daemon (139.178.68.195:40456). Feb 13 16:06:58.616347 sshd[6324]: Accepted publickey for core from 139.178.68.195 port 40456 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:58.626882 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:58.661941 systemd-logind[2020]: New session 16 of user core. Feb 13 16:06:58.673192 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 16:06:59.016906 sshd[6324]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:59.040684 systemd[1]: sshd@15-172.31.19.49:22-139.178.68.195:40456.service: Deactivated successfully. Feb 13 16:06:59.049970 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 16:06:59.056574 systemd-logind[2020]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:06:59.062198 systemd-logind[2020]: Removed session 16. Feb 13 16:07:04.049035 systemd[1]: Started sshd@16-172.31.19.49:22-139.178.68.195:40468.service - OpenSSH per-connection server daemon (139.178.68.195:40468). Feb 13 16:07:04.245517 sshd[6338]: Accepted publickey for core from 139.178.68.195 port 40468 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:04.248844 sshd[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:04.263973 systemd-logind[2020]: New session 17 of user core. Feb 13 16:07:04.274987 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 16:07:04.572059 sshd[6338]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:04.589478 systemd[1]: sshd@16-172.31.19.49:22-139.178.68.195:40468.service: Deactivated successfully. Feb 13 16:07:04.597864 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:07:04.611456 systemd-logind[2020]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:07:04.622924 systemd[1]: Started sshd@17-172.31.19.49:22-139.178.68.195:40472.service - OpenSSH per-connection server daemon (139.178.68.195:40472). Feb 13 16:07:04.628499 systemd-logind[2020]: Removed session 17. Feb 13 16:07:04.816145 sshd[6352]: Accepted publickey for core from 139.178.68.195 port 40472 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:04.819321 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:04.830634 systemd-logind[2020]: New session 18 of user core. Feb 13 16:07:04.840835 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:07:05.551308 sshd[6352]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:05.561198 systemd[1]: sshd@17-172.31.19.49:22-139.178.68.195:40472.service: Deactivated successfully. Feb 13 16:07:05.569811 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:07:05.572243 systemd-logind[2020]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:07:05.585163 systemd[1]: Started sshd@18-172.31.19.49:22-139.178.68.195:40488.service - OpenSSH per-connection server daemon (139.178.68.195:40488). Feb 13 16:07:05.587467 systemd-logind[2020]: Removed session 18. 
Feb 13 16:07:05.780248 sshd[6364]: Accepted publickey for core from 139.178.68.195 port 40488 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:05.785094 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:05.801999 systemd-logind[2020]: New session 19 of user core. Feb 13 16:07:05.810504 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:07:09.766760 sshd[6364]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:09.786765 systemd[1]: sshd@18-172.31.19.49:22-139.178.68.195:40488.service: Deactivated successfully. Feb 13 16:07:09.798760 systemd-logind[2020]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:07:09.816130 systemd[1]: Started sshd@19-172.31.19.49:22-139.178.68.195:56448.service - OpenSSH per-connection server daemon (139.178.68.195:56448). Feb 13 16:07:09.818856 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:07:09.827010 systemd-logind[2020]: Removed session 19. Feb 13 16:07:10.022811 sshd[6403]: Accepted publickey for core from 139.178.68.195 port 56448 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:10.025949 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:10.039199 systemd-logind[2020]: New session 20 of user core. Feb 13 16:07:10.046861 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:07:10.780121 sshd[6403]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:10.789937 systemd[1]: sshd@19-172.31.19.49:22-139.178.68.195:56448.service: Deactivated successfully. Feb 13 16:07:10.790600 systemd-logind[2020]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:07:10.804304 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:07:10.827590 systemd[1]: Started sshd@20-172.31.19.49:22-139.178.68.195:56450.service - OpenSSH per-connection server daemon (139.178.68.195:56450). Feb 13 16:07:10.830005 systemd-logind[2020]: Removed session 20. Feb 13 16:07:11.032440 sshd[6417]: Accepted publickey for core from 139.178.68.195 port 56450 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:11.035780 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:11.047874 systemd-logind[2020]: New session 21 of user core. Feb 13 16:07:11.054118 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:07:11.338806 sshd[6417]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:11.344642 systemd[1]: sshd@20-172.31.19.49:22-139.178.68.195:56450.service: Deactivated successfully. Feb 13 16:07:11.352857 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:07:11.354618 systemd-logind[2020]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:07:11.360008 systemd-logind[2020]: Removed session 21. Feb 13 16:07:16.369041 systemd[1]: Started sshd@21-172.31.19.49:22-139.178.68.195:56452.service - OpenSSH per-connection server daemon (139.178.68.195:56452). Feb 13 16:07:16.558596 sshd[6458]: Accepted publickey for core from 139.178.68.195 port 56452 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:16.561373 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:16.570995 systemd-logind[2020]: New session 22 of user core. Feb 13 16:07:16.579119 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 16:07:16.838880 sshd[6458]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:16.849138 systemd[1]: sshd@21-172.31.19.49:22-139.178.68.195:56452.service: Deactivated successfully.
Feb 13 16:07:16.856784 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 16:07:16.858884 systemd-logind[2020]: Session 22 logged out. Waiting for processes to exit.
Feb 13 16:07:16.862550 systemd-logind[2020]: Removed session 22.
Feb 13 16:07:21.871035 systemd[1]: Started sshd@22-172.31.19.49:22-139.178.68.195:40314.service - OpenSSH per-connection server daemon (139.178.68.195:40314).
Feb 13 16:07:22.056795 sshd[6478]: Accepted publickey for core from 139.178.68.195 port 40314 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:22.059993 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:22.071268 systemd-logind[2020]: New session 23 of user core.
Feb 13 16:07:22.076974 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 16:07:22.322669 sshd[6478]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:22.328567 systemd-logind[2020]: Session 23 logged out. Waiting for processes to exit.
Feb 13 16:07:22.330345 systemd[1]: sshd@22-172.31.19.49:22-139.178.68.195:40314.service: Deactivated successfully.
Feb 13 16:07:22.338654 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 16:07:22.343159 systemd-logind[2020]: Removed session 23.
Feb 13 16:07:27.357917 systemd[1]: Started sshd@23-172.31.19.49:22-139.178.68.195:47008.service - OpenSSH per-connection server daemon (139.178.68.195:47008).
Feb 13 16:07:27.538779 sshd[6512]: Accepted publickey for core from 139.178.68.195 port 47008 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:27.541780 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:27.554224 systemd-logind[2020]: New session 24 of user core.
Feb 13 16:07:27.559288 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 16:07:27.823829 sshd[6512]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:27.832244 systemd[1]: sshd@23-172.31.19.49:22-139.178.68.195:47008.service: Deactivated successfully.
Feb 13 16:07:27.844141 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 16:07:27.846388 systemd-logind[2020]: Session 24 logged out. Waiting for processes to exit.
Feb 13 16:07:27.849143 systemd-logind[2020]: Removed session 24.
Feb 13 16:07:32.858064 systemd[1]: Started sshd@24-172.31.19.49:22-139.178.68.195:47024.service - OpenSSH per-connection server daemon (139.178.68.195:47024).
Feb 13 16:07:33.045821 sshd[6527]: Accepted publickey for core from 139.178.68.195 port 47024 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:33.048532 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:33.057072 systemd-logind[2020]: New session 25 of user core.
Feb 13 16:07:33.062906 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 16:07:33.333080 sshd[6527]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:33.339071 systemd[1]: sshd@24-172.31.19.49:22-139.178.68.195:47024.service: Deactivated successfully.
Feb 13 16:07:33.339358 systemd-logind[2020]: Session 25 logged out. Waiting for processes to exit.
Feb 13 16:07:33.348158 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 16:07:33.351881 systemd-logind[2020]: Removed session 25.
Feb 13 16:07:38.366135 systemd[1]: Started sshd@25-172.31.19.49:22-139.178.68.195:45868.service - OpenSSH per-connection server daemon (139.178.68.195:45868).
Feb 13 16:07:38.557152 sshd[6542]: Accepted publickey for core from 139.178.68.195 port 45868 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:38.559867 sshd[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:38.569551 systemd-logind[2020]: New session 26 of user core.
Feb 13 16:07:38.573504 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 16:07:38.840740 sshd[6542]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:38.850535 systemd[1]: sshd@25-172.31.19.49:22-139.178.68.195:45868.service: Deactivated successfully.
Feb 13 16:07:38.861516 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 16:07:38.865139 systemd-logind[2020]: Session 26 logged out. Waiting for processes to exit.
Feb 13 16:07:38.868004 systemd-logind[2020]: Removed session 26.
Feb 13 16:07:39.100095 systemd[1]: run-containerd-runc-k8s.io-54189c6c82c8f9232311c9cabc30f2ebe43dd766c490769a4f0147e8b3246e61-runc.zzKur2.mount: Deactivated successfully.
Feb 13 16:07:43.872367 systemd[1]: Started sshd@26-172.31.19.49:22-139.178.68.195:45878.service - OpenSSH per-connection server daemon (139.178.68.195:45878).
Feb 13 16:07:44.064618 sshd[6578]: Accepted publickey for core from 139.178.68.195 port 45878 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:44.066741 sshd[6578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:44.075081 systemd-logind[2020]: New session 27 of user core.
Feb 13 16:07:44.080937 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 16:07:44.347358 sshd[6578]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:44.359136 systemd[1]: sshd@26-172.31.19.49:22-139.178.68.195:45878.service: Deactivated successfully.
Feb 13 16:07:44.369148 systemd-logind[2020]: Session 27 logged out. Waiting for processes to exit.
Feb 13 16:07:44.370092 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 16:07:44.374056 systemd-logind[2020]: Removed session 27.
Feb 13 16:07:49.381964 systemd[1]: Started sshd@27-172.31.19.49:22-139.178.68.195:38522.service - OpenSSH per-connection server daemon (139.178.68.195:38522).
Feb 13 16:07:49.574554 sshd[6611]: Accepted publickey for core from 139.178.68.195 port 38522 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:49.577452 sshd[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:49.590200 systemd-logind[2020]: New session 28 of user core.
Feb 13 16:07:49.596019 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 16:07:49.862811 sshd[6611]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:49.870209 systemd[1]: sshd@27-172.31.19.49:22-139.178.68.195:38522.service: Deactivated successfully.
Feb 13 16:07:49.880710 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 16:07:49.883228 systemd-logind[2020]: Session 28 logged out. Waiting for processes to exit.
Feb 13 16:07:49.886405 systemd-logind[2020]: Removed session 28.
Feb 13 16:08:03.255641 containerd[2047]: time="2025-02-13T16:08:03.255344758Z" level=info msg="shim disconnected" id=4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f namespace=k8s.io
Feb 13 16:08:03.255641 containerd[2047]: time="2025-02-13T16:08:03.255529450Z" level=warning msg="cleaning up after shim disconnected" id=4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f namespace=k8s.io
Feb 13 16:08:03.255641 containerd[2047]: time="2025-02-13T16:08:03.255556222Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:03.259179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f-rootfs.mount: Deactivated successfully.
Feb 13 16:08:03.663081 kubelet[3561]: I0213 16:08:03.662923 3561 scope.go:117] "RemoveContainer" containerID="4f4a9e18fa996254c164b078d13cfa4e4615e0c86e9c004ee4fefd74ea68472f"
Feb 13 16:08:03.668127 containerd[2047]: time="2025-02-13T16:08:03.668027604Z" level=info msg="CreateContainer within sandbox \"988100da5dd3181f854d766f832d73a3404b443ebfb7aec4305e5735a5c8c170\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 13 16:08:03.697171 containerd[2047]: time="2025-02-13T16:08:03.697094689Z" level=info msg="CreateContainer within sandbox \"988100da5dd3181f854d766f832d73a3404b443ebfb7aec4305e5735a5c8c170\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"12534472fa696101c5b1f65e56ef82c262a35b464275f7f2b4057a2fa759b979\""
Feb 13 16:08:03.698636 containerd[2047]: time="2025-02-13T16:08:03.698572597Z" level=info msg="StartContainer for \"12534472fa696101c5b1f65e56ef82c262a35b464275f7f2b4057a2fa759b979\""
Feb 13 16:08:03.700514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57657152.mount: Deactivated successfully.
Feb 13 16:08:03.805765 containerd[2047]: time="2025-02-13T16:08:03.805649401Z" level=info msg="StartContainer for \"12534472fa696101c5b1f65e56ef82c262a35b464275f7f2b4057a2fa759b979\" returns successfully"
Feb 13 16:08:04.060137 containerd[2047]: time="2025-02-13T16:08:04.060059866Z" level=info msg="shim disconnected" id=5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517 namespace=k8s.io
Feb 13 16:08:04.060942 containerd[2047]: time="2025-02-13T16:08:04.060588370Z" level=warning msg="cleaning up after shim disconnected" id=5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517 namespace=k8s.io
Feb 13 16:08:04.060942 containerd[2047]: time="2025-02-13T16:08:04.060653878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:04.254975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517-rootfs.mount: Deactivated successfully.
Feb 13 16:08:04.668823 kubelet[3561]: I0213 16:08:04.668775 3561 scope.go:117] "RemoveContainer" containerID="5132bc7a51ecb85f198874227a6911e82981d6bf31c94d8eb5269819a8856517"
Feb 13 16:08:04.674777 containerd[2047]: time="2025-02-13T16:08:04.674611489Z" level=info msg="CreateContainer within sandbox \"ce45e6c2fcb38e3c800a4ee1065816fe553c5aa12814c2373483fbae67121947\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 16:08:04.707590 containerd[2047]: time="2025-02-13T16:08:04.706183574Z" level=info msg="CreateContainer within sandbox \"ce45e6c2fcb38e3c800a4ee1065816fe553c5aa12814c2373483fbae67121947\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a6c627052645d3b2bbf652a1dd8d03ff98e9c6876e77672878417af8af19779e\""
Feb 13 16:08:04.708598 containerd[2047]: time="2025-02-13T16:08:04.708515018Z" level=info msg="StartContainer for \"a6c627052645d3b2bbf652a1dd8d03ff98e9c6876e77672878417af8af19779e\""
Feb 13 16:08:04.864206 containerd[2047]: time="2025-02-13T16:08:04.863775338Z" level=info msg="StartContainer for \"a6c627052645d3b2bbf652a1dd8d03ff98e9c6876e77672878417af8af19779e\" returns successfully"
Feb 13 16:08:08.193041 containerd[2047]: time="2025-02-13T16:08:08.192890427Z" level=info msg="shim disconnected" id=584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458 namespace=k8s.io
Feb 13 16:08:08.193041 containerd[2047]: time="2025-02-13T16:08:08.192987687Z" level=warning msg="cleaning up after shim disconnected" id=584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458 namespace=k8s.io
Feb 13 16:08:08.194309 containerd[2047]: time="2025-02-13T16:08:08.193008879Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:08.208714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458-rootfs.mount: Deactivated successfully.
Feb 13 16:08:08.697280 kubelet[3561]: I0213 16:08:08.696781 3561 scope.go:117] "RemoveContainer" containerID="584d9e592ea4722caf947d0201ca74e69b9a6bb6cc1d9f94573fd47d20262458"
Feb 13 16:08:08.701987 containerd[2047]: time="2025-02-13T16:08:08.701855681Z" level=info msg="CreateContainer within sandbox \"606b6bd0a601a28bad35e1fdb78db28e839628ed201f54df1c004cd47783f45d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 16:08:08.724897 containerd[2047]: time="2025-02-13T16:08:08.724526034Z" level=info msg="CreateContainer within sandbox \"606b6bd0a601a28bad35e1fdb78db28e839628ed201f54df1c004cd47783f45d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"64ad7f9569a080f11b51d4209b7940cea37f6996f38489821843f7a9595fb34e\""
Feb 13 16:08:08.727517 containerd[2047]: time="2025-02-13T16:08:08.727393050Z" level=info msg="StartContainer for \"64ad7f9569a080f11b51d4209b7940cea37f6996f38489821843f7a9595fb34e\""
Feb 13 16:08:08.884214 containerd[2047]: time="2025-02-13T16:08:08.882842742Z" level=info msg="StartContainer for \"64ad7f9569a080f11b51d4209b7940cea37f6996f38489821843f7a9595fb34e\" returns successfully"
Feb 13 16:08:10.758900 kubelet[3561]: E0213 16:08:10.758841 3561 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-49?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"