Apr 13 19:23:13.234462 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Apr 13 19:23:13.234508 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026 Apr 13 19:23:13.234533 kernel: KASLR disabled due to lack of seed Apr 13 19:23:13.234551 kernel: efi: EFI v2.7 by EDK II Apr 13 19:23:13.234567 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18 Apr 13 19:23:13.234584 kernel: ACPI: Early table checksum verification disabled Apr 13 19:23:13.234602 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Apr 13 19:23:13.234618 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Apr 13 19:23:13.234635 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 13 19:23:13.234651 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 13 19:23:13.234672 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 13 19:23:13.234688 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Apr 13 19:23:13.234704 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Apr 13 19:23:13.234720 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Apr 13 19:23:13.234740 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 13 19:23:13.234760 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Apr 13 19:23:13.234779 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Apr 13 19:23:13.234795 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Apr 13 19:23:13.234812 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Apr 13 19:23:13.234829 kernel: printk: bootconsole [uart0] enabled Apr 13 19:23:13.234846 kernel: NUMA: Failed to initialise from firmware Apr 13 19:23:13.234863 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Apr 13 19:23:13.234880 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Apr 13 19:23:13.234897 kernel: Zone ranges: Apr 13 19:23:13.234914 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Apr 13 19:23:13.234930 kernel: DMA32 empty Apr 13 19:23:13.234951 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Apr 13 19:23:13.234968 kernel: Movable zone start for each node Apr 13 19:23:13.235018 kernel: Early memory node ranges Apr 13 19:23:13.235039 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Apr 13 19:23:13.235057 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Apr 13 19:23:13.235074 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Apr 13 19:23:13.235092 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Apr 13 19:23:13.235109 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Apr 13 19:23:13.235126 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Apr 13 19:23:13.235143 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Apr 13 19:23:13.235159 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Apr 13 19:23:13.235176 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Apr 13 19:23:13.235199 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Apr 13 19:23:13.235217 kernel: psci: probing for conduit method from ACPI. Apr 13 19:23:13.235241 kernel: psci: PSCIv1.0 detected in firmware. Apr 13 19:23:13.235259 kernel: psci: Using standard PSCI v0.2 function IDs Apr 13 19:23:13.235277 kernel: psci: Trusted OS migration not required Apr 13 19:23:13.235299 kernel: psci: SMC Calling Convention v1.1 Apr 13 19:23:13.235318 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Apr 13 19:23:13.235337 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Apr 13 19:23:13.235355 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Apr 13 19:23:13.235373 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 13 19:23:13.235392 kernel: Detected PIPT I-cache on CPU0 Apr 13 19:23:13.235423 kernel: CPU features: detected: GIC system register CPU interface Apr 13 19:23:13.235447 kernel: CPU features: detected: Spectre-v2 Apr 13 19:23:13.235465 kernel: CPU features: detected: Spectre-v3a Apr 13 19:23:13.235483 kernel: CPU features: detected: Spectre-BHB Apr 13 19:23:13.235500 kernel: CPU features: detected: ARM erratum 1742098 Apr 13 19:23:13.235524 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Apr 13 19:23:13.235542 kernel: alternatives: applying boot alternatives Apr 13 19:23:13.235562 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:23:13.235581 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 19:23:13.235599 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 19:23:13.235617 kernel: Fallback order for Node 0: 0 Apr 13 19:23:13.235635 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Apr 13 19:23:13.235653 kernel: Policy zone: Normal Apr 13 19:23:13.235670 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 19:23:13.235688 kernel: software IO TLB: area num 2. Apr 13 19:23:13.235706 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Apr 13 19:23:13.235730 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved) Apr 13 19:23:13.235748 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 19:23:13.235766 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 19:23:13.235785 kernel: rcu: RCU event tracing is enabled. Apr 13 19:23:13.235803 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 19:23:13.235822 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 19:23:13.235840 kernel: Tracing variant of Tasks RCU enabled. Apr 13 19:23:13.235858 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 13 19:23:13.235876 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 19:23:13.235894 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 13 19:23:13.235912 kernel: GICv3: 96 SPIs implemented Apr 13 19:23:13.235933 kernel: GICv3: 0 Extended SPIs implemented Apr 13 19:23:13.235951 kernel: Root IRQ handler: gic_handle_irq Apr 13 19:23:13.235969 kernel: GICv3: GICv3 features: 16 PPIs Apr 13 19:23:13.239476 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Apr 13 19:23:13.239504 kernel: ITS [mem 0x10080000-0x1009ffff] Apr 13 19:23:13.239522 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Apr 13 19:23:13.239542 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Apr 13 19:23:13.239560 kernel: GICv3: using LPI property table @0x00000004000d0000 Apr 13 19:23:13.239579 kernel: ITS: Using hypervisor restricted LPI range [128] Apr 13 19:23:13.239597 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Apr 13 19:23:13.239615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 13 19:23:13.239633 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Apr 13 19:23:13.239659 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Apr 13 19:23:13.239677 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Apr 13 19:23:13.239696 kernel: Console: colour dummy device 80x25 Apr 13 19:23:13.239715 kernel: printk: console [tty1] enabled Apr 13 19:23:13.239733 kernel: ACPI: Core revision 20230628 Apr 13 19:23:13.239752 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Apr 13 19:23:13.239770 kernel: pid_max: default: 32768 minimum: 301 Apr 13 19:23:13.239788 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 19:23:13.239806 kernel: landlock: Up and running. Apr 13 19:23:13.239829 kernel: SELinux: Initializing. Apr 13 19:23:13.239848 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 19:23:13.239866 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 19:23:13.239906 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 19:23:13.239928 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 19:23:13.239946 kernel: rcu: Hierarchical SRCU implementation. Apr 13 19:23:13.239966 kernel: rcu: Max phase no-delay instances is 400. Apr 13 19:23:13.240023 kernel: Platform MSI: ITS@0x10080000 domain created Apr 13 19:23:13.240049 kernel: PCI/MSI: ITS@0x10080000 domain created Apr 13 19:23:13.240075 kernel: Remapping and enabling EFI services. Apr 13 19:23:13.240094 kernel: smp: Bringing up secondary CPUs ... Apr 13 19:23:13.240113 kernel: Detected PIPT I-cache on CPU1 Apr 13 19:23:13.240131 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Apr 13 19:23:13.240149 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Apr 13 19:23:13.240168 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Apr 13 19:23:13.240186 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 19:23:13.240204 kernel: SMP: Total of 2 processors activated. 
Apr 13 19:23:13.240222 kernel: CPU features: detected: 32-bit EL0 Support Apr 13 19:23:13.240244 kernel: CPU features: detected: 32-bit EL1 Support Apr 13 19:23:13.240262 kernel: CPU features: detected: CRC32 instructions Apr 13 19:23:13.240281 kernel: CPU: All CPU(s) started at EL1 Apr 13 19:23:13.240331 kernel: alternatives: applying system-wide alternatives Apr 13 19:23:13.240356 kernel: devtmpfs: initialized Apr 13 19:23:13.240392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 19:23:13.240414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 19:23:13.240433 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 19:23:13.240452 kernel: SMBIOS 3.0.0 present. Apr 13 19:23:13.240478 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Apr 13 19:23:13.240498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 19:23:13.240517 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 13 19:23:13.240537 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 13 19:23:13.240556 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 13 19:23:13.240575 kernel: audit: initializing netlink subsys (disabled) Apr 13 19:23:13.240595 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1 Apr 13 19:23:13.240613 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 19:23:13.240638 kernel: cpuidle: using governor menu Apr 13 19:23:13.240657 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 13 19:23:13.240676 kernel: ASID allocator initialised with 65536 entries Apr 13 19:23:13.240695 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 19:23:13.240714 kernel: Serial: AMBA PL011 UART driver Apr 13 19:23:13.240733 kernel: Modules: 17488 pages in range for non-PLT usage Apr 13 19:23:13.240752 kernel: Modules: 509008 pages in range for PLT usage Apr 13 19:23:13.240771 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 19:23:13.240790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 19:23:13.240813 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 13 19:23:13.240832 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 13 19:23:13.240852 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 19:23:13.240871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 19:23:13.240890 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 13 19:23:13.240909 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 13 19:23:13.240928 kernel: ACPI: Added _OSI(Module Device) Apr 13 19:23:13.240947 kernel: ACPI: Added _OSI(Processor Device) Apr 13 19:23:13.240966 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 19:23:13.245054 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 19:23:13.245092 kernel: ACPI: Interpreter enabled Apr 13 19:23:13.245113 kernel: ACPI: Using GIC for interrupt routing Apr 13 19:23:13.245135 kernel: ACPI: MCFG table detected, 1 entries Apr 13 19:23:13.245155 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Apr 13 19:23:13.245518 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 19:23:13.245788 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 13 19:23:13.246070 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 13 19:23:13.246324 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Apr 13 19:23:13.246545 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Apr 13 19:23:13.246573 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Apr 13 19:23:13.246593 kernel: acpiphp: Slot [1] registered Apr 13 19:23:13.246613 kernel: acpiphp: Slot [2] registered Apr 13 19:23:13.246632 kernel: acpiphp: Slot [3] registered Apr 13 19:23:13.246651 kernel: acpiphp: Slot [4] registered Apr 13 19:23:13.246670 kernel: acpiphp: Slot [5] registered Apr 13 19:23:13.246697 kernel: acpiphp: Slot [6] registered Apr 13 19:23:13.246717 kernel: acpiphp: Slot [7] registered Apr 13 19:23:13.246736 kernel: acpiphp: Slot [8] registered Apr 13 19:23:13.246755 kernel: acpiphp: Slot [9] registered Apr 13 19:23:13.246774 kernel: acpiphp: Slot [10] registered Apr 13 19:23:13.246793 kernel: acpiphp: Slot [11] registered Apr 13 19:23:13.246811 kernel: acpiphp: Slot [12] registered Apr 13 19:23:13.246830 kernel: acpiphp: Slot [13] registered Apr 13 19:23:13.246849 kernel: acpiphp: Slot [14] registered Apr 13 19:23:13.246868 kernel: acpiphp: Slot [15] registered Apr 13 19:23:13.246891 kernel: acpiphp: Slot [16] registered Apr 13 19:23:13.246910 kernel: acpiphp: Slot [17] registered Apr 13 19:23:13.246929 kernel: acpiphp: Slot [18] registered Apr 13 19:23:13.246948 kernel: acpiphp: Slot [19] registered Apr 13 19:23:13.246967 kernel: acpiphp: Slot [20] registered Apr 13 19:23:13.251094 kernel: acpiphp: Slot [21] registered Apr 13 19:23:13.251136 kernel: acpiphp: Slot [22] registered Apr 13 19:23:13.251157 kernel: acpiphp: Slot [23] registered Apr 13 19:23:13.251177 kernel: acpiphp: Slot [24] registered Apr 13 19:23:13.251207 kernel: acpiphp: Slot [25] registered Apr 13 19:23:13.251227 kernel: acpiphp: Slot [26] registered Apr 13 19:23:13.251246 kernel: acpiphp: Slot [27] registered Apr 13 19:23:13.251265 kernel: acpiphp: Slot [28] registered Apr 13 19:23:13.251284 kernel: acpiphp: Slot [29] registered Apr 13 19:23:13.251303 kernel: acpiphp: Slot [30] registered Apr 13 19:23:13.251322 kernel: acpiphp: Slot [31] registered Apr 13 19:23:13.251341 kernel: PCI host bridge to bus 0000:00 Apr 13 19:23:13.251645 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Apr 13 19:23:13.251847 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 13 19:23:13.252110 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Apr 13 19:23:13.252304 kernel: pci_bus 0000:00: root bus resource [bus 00] Apr 13 19:23:13.252562 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Apr 13 19:23:13.252798 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Apr 13 19:23:13.256383 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Apr 13 19:23:13.256667 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 13 19:23:13.256896 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Apr 13 19:23:13.257169 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 13 19:23:13.257408 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 13 19:23:13.257628 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Apr 13 19:23:13.257850 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Apr 13 19:23:13.258102 kernel: pci 0000:00:05.0: reg 0x20: [mem 
0x80100000-0x8010ffff] Apr 13 19:23:13.258328 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 13 19:23:13.258529 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Apr 13 19:23:13.258721 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 13 19:23:13.258910 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Apr 13 19:23:13.258936 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 13 19:23:13.258957 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 13 19:23:13.258976 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 13 19:23:13.259023 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 13 19:23:13.259051 kernel: iommu: Default domain type: Translated Apr 13 19:23:13.259071 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 13 19:23:13.259090 kernel: efivars: Registered efivars operations Apr 13 19:23:13.259109 kernel: vgaarb: loaded Apr 13 19:23:13.259128 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 13 19:23:13.259150 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 19:23:13.259169 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 19:23:13.259189 kernel: pnp: PnP ACPI init Apr 13 19:23:13.259452 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Apr 13 19:23:13.259487 kernel: pnp: PnP ACPI: found 1 devices Apr 13 19:23:13.259507 kernel: NET: Registered PF_INET protocol family Apr 13 19:23:13.259526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 19:23:13.259546 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 19:23:13.259565 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 19:23:13.259584 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 19:23:13.259603 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:23:13.259622 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:23:13.259646 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:23:13.259665 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:23:13.259684 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:23:13.259703 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:23:13.259722 kernel: kvm [1]: HYP mode not available Apr 13 19:23:13.259741 kernel: Initialise system trusted keyrings Apr 13 19:23:13.259760 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:23:13.259779 kernel: Key type asymmetric registered Apr 13 19:23:13.259797 kernel: Asymmetric key parser 'x509' registered Apr 13 19:23:13.259820 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:23:13.259840 kernel: io scheduler mq-deadline registered Apr 13 19:23:13.259858 kernel: io scheduler kyber registered Apr 13 19:23:13.259877 kernel: io scheduler bfq registered Apr 13 19:23:13.260127 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Apr 13 19:23:13.260158 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 13 19:23:13.260178 kernel: ACPI: button: Power Button [PWRB] Apr 13 19:23:13.260197 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Apr 13 19:23:13.260216 kernel: ACPI: button: Sleep Button [SLPB] Apr 13 
19:23:13.260242 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 19:23:13.260262 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:23:13.260478 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Apr 13 19:23:13.260506 kernel: printk: console [ttyS0] disabled Apr 13 19:23:13.260525 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Apr 13 19:23:13.260545 kernel: printk: console [ttyS0] enabled Apr 13 19:23:13.260564 kernel: printk: bootconsole [uart0] disabled Apr 13 19:23:13.260583 kernel: thunder_xcv, ver 1.0 Apr 13 19:23:13.260602 kernel: thunder_bgx, ver 1.0 Apr 13 19:23:13.260627 kernel: nicpf, ver 1.0 Apr 13 19:23:13.260646 kernel: nicvf, ver 1.0 Apr 13 19:23:13.260912 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 13 19:23:13.261207 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:23:12 UTC (1776108192) Apr 13 19:23:13.261623 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 19:23:13.261644 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Apr 13 19:23:13.261664 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 13 19:23:13.261683 kernel: watchdog: Hard watchdog permanently disabled Apr 13 19:23:13.261712 kernel: NET: Registered PF_INET6 protocol family Apr 13 19:23:13.261731 kernel: Segment Routing with IPv6 Apr 13 19:23:13.261750 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 19:23:13.261768 kernel: NET: Registered PF_PACKET protocol family Apr 13 19:23:13.261788 kernel: Key type dns_resolver registered Apr 13 19:23:13.261807 kernel: registered taskstats version 1 Apr 13 19:23:13.261826 kernel: Loading compiled-in X.509 certificates Apr 13 19:23:13.261845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7' Apr 13 19:23:13.261865 kernel: Key type .fscrypt registered Apr 13 19:23:13.261888 kernel: Key type fscrypt-provisioning registered Apr 13 19:23:13.261907 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 13 19:23:13.261926 kernel: ima: Allocated hash algorithm: sha1 Apr 13 19:23:13.261946 kernel: ima: No architecture policies found Apr 13 19:23:13.261965 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 13 19:23:13.264051 kernel: clk: Disabling unused clocks Apr 13 19:23:13.264095 kernel: Freeing unused kernel memory: 39424K Apr 13 19:23:13.264115 kernel: Run /init as init process Apr 13 19:23:13.264134 kernel: with arguments: Apr 13 19:23:13.264162 kernel: /init Apr 13 19:23:13.264182 kernel: with environment: Apr 13 19:23:13.264202 kernel: HOME=/ Apr 13 19:23:13.264221 kernel: TERM=linux Apr 13 19:23:13.264246 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:23:13.264272 systemd[1]: Detected virtualization amazon. Apr 13 19:23:13.264295 systemd[1]: Detected architecture arm64. Apr 13 19:23:13.264315 systemd[1]: Running in initrd. Apr 13 19:23:13.264341 systemd[1]: No hostname configured, using default hostname. Apr 13 19:23:13.264362 systemd[1]: Hostname set to . Apr 13 19:23:13.264384 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:23:13.264405 systemd[1]: Queued start job for default target initrd.target. 
Apr 13 19:23:13.264427 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:23:13.264449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:23:13.264472 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 19:23:13.264493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:23:13.264519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 19:23:13.264541 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 19:23:13.264564 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 19:23:13.264585 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 19:23:13.264606 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:23:13.264627 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:23:13.264652 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:23:13.264674 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:23:13.264695 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:23:13.264717 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:23:13.264738 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:23:13.264760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:23:13.264781 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 19:23:13.264801 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:23:13.264822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:23:13.264847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:23:13.264868 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:23:13.264889 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:23:13.264910 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 19:23:13.264931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:23:13.264952 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 19:23:13.264973 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 19:23:13.265028 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:23:13.265053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:23:13.265080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:13.265101 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 19:23:13.265164 systemd-journald[251]: Collecting audit messages is disabled. Apr 13 19:23:13.265212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:23:13.265239 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 19:23:13.265262 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 13 19:23:13.265283 systemd-journald[251]: Journal started Apr 13 19:23:13.265342 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2d8105dac1d1f2c6c9d17dff22f4d8) is 8.0M, max 75.3M, 67.3M free. Apr 13 19:23:13.225843 systemd-modules-load[252]: Inserted module 'overlay' Apr 13 19:23:13.278024 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:23:13.283738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:13.291784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 19:23:13.291824 kernel: Bridge firewalling registered Apr 13 19:23:13.292749 systemd-modules-load[252]: Inserted module 'br_netfilter' Apr 13 19:23:13.298331 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:23:13.299077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:23:13.317338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:23:13.325087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:23:13.334300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:23:13.363599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:23:13.384235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:13.389126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:23:13.406796 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:13.412937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:23:13.422226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 19:23:13.437342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:23:13.470226 dracut-cmdline[288]: dracut-dracut-053 Apr 13 19:23:13.480222 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:23:13.523886 systemd-resolved[290]: Positive Trust Anchors: Apr 13 19:23:13.523922 systemd-resolved[290]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:23:13.524019 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:23:13.637015 kernel: SCSI subsystem initialized Apr 13 19:23:13.643023 kernel: Loading iSCSI transport class v2.0-870. Apr 13 19:23:13.656025 kernel: iscsi: registered transport (tcp) Apr 13 19:23:13.679373 kernel: iscsi: registered transport (qla4xxx) Apr 13 19:23:13.679461 kernel: QLogic iSCSI HBA Driver Apr 13 19:23:13.759060 kernel: random: crng init done Apr 13 19:23:13.759334 systemd-resolved[290]: Defaulting to hostname 'linux'. Apr 13 19:23:13.769653 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:23:13.773933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:23:13.796102 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 19:23:13.805512 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 19:23:13.845504 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 19:23:13.845599 kernel: device-mapper: uevent: version 1.0.3 Apr 13 19:23:13.847555 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 19:23:13.916040 kernel: raid6: neonx8 gen() 6621 MB/s Apr 13 19:23:13.933024 kernel: raid6: neonx4 gen() 6430 MB/s Apr 13 19:23:13.950023 kernel: raid6: neonx2 gen() 5371 MB/s Apr 13 19:23:13.967025 kernel: raid6: neonx1 gen() 3918 MB/s Apr 13 19:23:13.984026 kernel: raid6: int64x8 gen() 3791 MB/s Apr 13 19:23:14.002025 kernel: raid6: int64x4 gen() 3670 MB/s Apr 13 19:23:14.019027 kernel: raid6: int64x2 gen() 3527 MB/s Apr 13 19:23:14.037135 kernel: raid6: int64x1 gen() 2759 MB/s Apr 13 19:23:14.037180 kernel: raid6: using algorithm neonx8 gen() 6621 MB/s Apr 13 19:23:14.056034 kernel: raid6: .... xor() 4910 MB/s, rmw enabled Apr 13 19:23:14.056095 kernel: raid6: using neon recovery algorithm Apr 13 19:23:14.065275 kernel: xor: measuring software checksum speed Apr 13 19:23:14.065343 kernel: 8regs : 10925 MB/sec Apr 13 19:23:14.066520 kernel: 32regs : 11919 MB/sec Apr 13 19:23:14.067893 kernel: arm64_neon : 9558 MB/sec Apr 13 19:23:14.067937 kernel: xor: using function: 32regs (11919 MB/sec) Apr 13 19:23:14.153033 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 19:23:14.173633 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:23:14.187322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:23:14.232825 systemd-udevd[472]: Using default interface naming scheme 'v255'. Apr 13 19:23:14.241848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:23:14.269332 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 13 19:23:14.306385 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Apr 13 19:23:14.370541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:23:14.390249 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:23:14.509823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:23:14.522276 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 19:23:14.577211 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 19:23:14.584400 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:23:14.591060 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:23:14.602091 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:23:14.618456 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 19:23:14.662623 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:23:14.717241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:23:14.717927 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:14.722770 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:23:14.725437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:23:14.725721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:14.728459 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:14.749574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:14.760569 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 13 19:23:14.760611 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 13 19:23:14.765772 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 13 19:23:14.766307 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 13 19:23:14.781014 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:69:96:c8:b7:eb Apr 13 19:23:14.784333 (udev-worker)[524]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:23:14.789904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:14.803472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:23:14.819700 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 13 19:23:14.819772 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 13 19:23:14.832477 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 13 19:23:14.840033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 19:23:14.840127 kernel: GPT:9289727 != 33554431 Apr 13 19:23:14.840154 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 19:23:14.840180 kernel: GPT:9289727 != 33554431 Apr 13 19:23:14.840204 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 19:23:14.842214 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:14.852520 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 19:23:14.947359 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (524) Apr 13 19:23:14.947455 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (521) Apr 13 19:23:15.033877 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 13 19:23:15.068673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 13 19:23:15.085722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 13 19:23:15.088499 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 13 19:23:15.104198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 13 19:23:15.119292 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:23:15.134607 disk-uuid[665]: Primary Header is updated. Apr 13 19:23:15.134607 disk-uuid[665]: Secondary Entries is updated. Apr 13 19:23:15.134607 disk-uuid[665]: Secondary Header is updated. Apr 13 19:23:15.145051 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:15.152089 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:15.161029 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:16.164015 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 13 19:23:16.166100 disk-uuid[666]: The operation has completed successfully. Apr 13 19:23:16.360390 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 19:23:16.360597 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:23:16.402620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:23:16.413085 sh[1010]: Success Apr 13 19:23:16.433337 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:23:16.555938 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 19:23:16.570209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 19:23:16.584509 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 19:23:16.616972 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:23:16.617051 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:16.617079 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:23:16.619040 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:23:16.621626 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:23:16.756032 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 19:23:16.759349 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 19:23:16.759857 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:23:16.772367 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:23:16.781240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 13 19:23:16.798803 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:16.798864 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:16.800301 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 19:23:16.815031 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 19:23:16.833677 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:23:16.838039 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:16.847731 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 19:23:16.861440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 19:23:16.971156 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:23:16.984400 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:23:17.046093 systemd-networkd[1202]: lo: Link UP Apr 13 19:23:17.046115 systemd-networkd[1202]: lo: Gained carrier Apr 13 19:23:17.051703 systemd-networkd[1202]: Enumeration completed Apr 13 19:23:17.051927 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:23:17.055896 systemd[1]: Reached target network.target - Network. Apr 13 19:23:17.056384 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:17.056392 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:23:17.064362 systemd-networkd[1202]: eth0: Link UP Apr 13 19:23:17.064370 systemd-networkd[1202]: eth0: Gained carrier Apr 13 19:23:17.064388 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:17.092085 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.27.52/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 13 19:23:17.317563 ignition[1110]: Ignition 2.19.0 Apr 13 19:23:17.318054 ignition[1110]: Stage: fetch-offline Apr 13 19:23:17.319674 ignition[1110]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:17.319702 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:17.326363 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:23:17.320465 ignition[1110]: Ignition finished successfully Apr 13 19:23:17.347303 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 13 19:23:17.372772 ignition[1211]: Ignition 2.19.0 Apr 13 19:23:17.372803 ignition[1211]: Stage: fetch Apr 13 19:23:17.374571 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:17.374598 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:17.374754 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:17.391569 ignition[1211]: PUT result: OK Apr 13 19:23:17.395243 ignition[1211]: parsed url from cmdline: "" Apr 13 19:23:17.395268 ignition[1211]: no config URL provided Apr 13 19:23:17.395287 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:23:17.395315 ignition[1211]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:23:17.395348 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:17.399739 ignition[1211]: PUT result: OK Apr 13 19:23:17.399843 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 13 19:23:17.404570 ignition[1211]: GET result: OK Apr 13 19:23:17.412102 unknown[1211]: fetched base config from "system" Apr 13 19:23:17.404737 ignition[1211]: parsing config with SHA512: d44aa1c4a451af5001675e08574fb5a7069014c3de230fcb8472e56ce879e1cc5f7bc76c1b1b6fa1aec778c38e7b41401710d67a581da5985309e84cda2f26cf Apr 13 19:23:17.412118 unknown[1211]: fetched base config from "system" Apr 13 19:23:17.412744 ignition[1211]: fetch: fetch complete Apr 13 19:23:17.412133 unknown[1211]: fetched user config from "aws" Apr 13 19:23:17.412755 ignition[1211]: fetch: fetch passed Apr 13 19:23:17.418262 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 19:23:17.412830 ignition[1211]: Ignition finished successfully Apr 13 19:23:17.434873 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 19:23:17.476185 ignition[1217]: Ignition 2.19.0 Apr 13 19:23:17.476212 ignition[1217]: Stage: kargs Apr 13 19:23:17.477140 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:17.477166 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:17.477315 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:17.478765 ignition[1217]: PUT result: OK Apr 13 19:23:17.489705 ignition[1217]: kargs: kargs passed Apr 13 19:23:17.489800 ignition[1217]: Ignition finished successfully Apr 13 19:23:17.497065 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 19:23:17.507342 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 19:23:17.537714 ignition[1223]: Ignition 2.19.0 Apr 13 19:23:17.537737 ignition[1223]: Stage: disks Apr 13 19:23:17.539037 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:17.539063 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:17.539220 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:17.549283 ignition[1223]: PUT result: OK Apr 13 19:23:17.554692 ignition[1223]: disks: disks passed Apr 13 19:23:17.555014 ignition[1223]: Ignition finished successfully Apr 13 19:23:17.560472 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 19:23:17.565486 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 19:23:17.570456 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:23:17.573274 systemd[1]: Reached target local-fs.target - Local File Systems. 
Apr 13 19:23:17.575812 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:23:17.583533 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:23:17.594306 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 19:23:17.646485 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 13 19:23:17.650388 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 19:23:17.663239 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 19:23:17.752038 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:23:17.753051 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 19:23:17.757575 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:23:17.774207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:23:17.787175 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:23:17.787910 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 19:23:17.788008 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:23:17.788061 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:23:17.826766 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1251) Apr 13 19:23:17.804928 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:23:17.834332 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:17.834383 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:17.834412 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 19:23:17.811261 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 19:23:17.849092 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 19:23:17.852449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:23:18.140844 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:23:18.151204 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:23:18.159590 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:23:18.168874 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:23:18.394745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:23:18.410182 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:23:18.418877 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 19:23:18.433436 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 19:23:18.439634 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:18.488451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 13 19:23:18.498134 ignition[1364]: INFO : Ignition 2.19.0 Apr 13 19:23:18.498134 ignition[1364]: INFO : Stage: mount Apr 13 19:23:18.502191 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:18.502191 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:18.502191 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:18.509906 ignition[1364]: INFO : PUT result: OK Apr 13 19:23:18.514163 ignition[1364]: INFO : mount: mount passed Apr 13 19:23:18.522854 ignition[1364]: INFO : Ignition finished successfully Apr 13 19:23:18.519597 systemd-networkd[1202]: eth0: Gained IPv6LL Apr 13 19:23:18.520096 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:23:18.538741 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 19:23:18.762456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:23:18.795030 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1375) Apr 13 19:23:18.799943 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:23:18.800011 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:23:18.800040 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 19:23:18.808036 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 19:23:18.810371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:23:18.844218 ignition[1392]: INFO : Ignition 2.19.0 Apr 13 19:23:18.844218 ignition[1392]: INFO : Stage: files Apr 13 19:23:18.849796 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:18.849796 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:18.849796 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:18.849796 ignition[1392]: INFO : PUT result: OK Apr 13 19:23:18.862591 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:23:18.865327 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:23:18.865327 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:23:18.882625 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:23:18.885832 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:23:18.889559 unknown[1392]: wrote ssh authorized keys file for user: core Apr 13 19:23:18.892871 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:23:18.897185 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:23:18.897185 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:23:18.986149 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:23:19.140590 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:23:19.140590 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 
19:23:19.140590 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 19:23:19.140590 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:23:19.158667 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Apr 13 19:23:21.825171 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:23:22.240149 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:23:22.245267 ignition[1392]: INFO : files: files passed Apr 13 19:23:22.245267 ignition[1392]: INFO : Ignition finished successfully Apr 13 19:23:22.261544 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:23:22.294420 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:23:22.302170 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 19:23:22.312587 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:23:22.315217 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 19:23:22.347676 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:23:22.347676 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:23:22.355221 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:23:22.361457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:23:22.368501 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:23:22.379307 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:23:22.436476 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 19:23:22.436884 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:23:22.447398 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 19:23:22.449898 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 19:23:22.452409 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:23:22.464372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:23:22.508044 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:23:22.519447 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:23:22.549572 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:23:22.553530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:23:22.561609 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 19:23:22.565707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 19:23:22.568171 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:23:22.571474 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 19:23:22.579500 systemd[1]: Stopped target basic.target - Basic System. Apr 13 19:23:22.581686 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 19:23:22.584552 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:23:22.589687 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 19:23:22.594483 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 19:23:22.599552 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:23:22.600018 systemd[1]: Stopped target sysinit.target - System Initialization. 
Apr 13 19:23:22.600350 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 19:23:22.600702 systemd[1]: Stopped target swap.target - Swaps. Apr 13 19:23:22.601193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 19:23:22.601511 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:23:22.602564 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:23:22.603042 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:23:22.603274 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 19:23:22.616764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:23:22.617139 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 19:23:22.617445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 19:23:22.625177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 19:23:22.625529 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:23:22.631502 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 19:23:22.631801 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 19:23:22.658166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 19:23:22.663108 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:23:22.666319 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:23:22.695290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 19:23:22.697977 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 19:23:22.699231 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:23:22.709959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 19:23:22.710272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:23:22.727362 ignition[1446]: INFO : Ignition 2.19.0 Apr 13 19:23:22.727362 ignition[1446]: INFO : Stage: umount Apr 13 19:23:22.727362 ignition[1446]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:23:22.727362 ignition[1446]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:23:22.727362 ignition[1446]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:23:22.747846 ignition[1446]: INFO : PUT result: OK Apr 13 19:23:22.747846 ignition[1446]: INFO : umount: umount passed Apr 13 19:23:22.747846 ignition[1446]: INFO : Ignition finished successfully Apr 13 19:23:22.756955 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 19:23:22.759125 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 19:23:22.766628 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:23:22.766842 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:23:22.777450 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:23:22.777574 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:23:22.781882 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 19:23:22.781972 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:23:22.784585 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Apr 13 19:23:22.784682 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:23:22.797105 systemd[1]: Stopped target network.target - Network. Apr 13 19:23:22.799135 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:23:22.799249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:23:22.802158 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:23:22.804263 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 19:23:22.808504 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:23:22.811472 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:23:22.813538 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:23:22.815825 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:23:22.815910 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:23:22.818316 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:23:22.818407 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:23:22.821866 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:23:22.821972 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:23:22.826361 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:23:22.826452 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 19:23:22.831114 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:23:22.850336 systemd-networkd[1202]: eth0: DHCPv6 lease lost Apr 13 19:23:22.855421 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:23:22.862589 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 19:23:22.865922 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:23:22.869290 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 19:23:22.873506 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:23:22.873736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:23:22.880652 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:23:22.881372 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:23:22.892865 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 19:23:22.892973 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:23:22.909912 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 19:23:22.910042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 19:23:22.926172 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:23:22.928412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:23:22.928544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:23:22.938477 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:23:22.938587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:22.941404 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:23:22.941488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:23:22.944175 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 13 19:23:22.944257 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:23:22.949060 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:23:22.978680 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:23:22.978975 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:23:22.986164 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:23:22.986323 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:23:22.994651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:23:22.994728 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:23:22.997432 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:23:22.997520 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:23:23.011745 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 19:23:23.011849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:23:23.014419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:23:23.014503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:23.034386 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:23:23.037010 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:23:23.037137 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:23:23.040223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:23:23.040308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:23.067429 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:23:23.069040 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:23:23.080391 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:23:23.080922 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:23:23.085816 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:23:23.104853 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 19:23:23.121500 systemd[1]: Switching root. Apr 13 19:23:23.164831 systemd-journald[251]: Journal stopped Apr 13 19:23:25.057752 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Apr 13 19:23:25.057901 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 19:23:25.057955 kernel: SELinux: policy capability open_perms=1 Apr 13 19:23:25.059495 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 19:23:25.059546 kernel: SELinux: policy capability always_check_network=0 Apr 13 19:23:25.059578 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 19:23:25.059610 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 19:23:25.059644 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 19:23:25.059675 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 19:23:25.059719 kernel: audit: type=1403 audit(1776108203.414:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 19:23:25.059762 systemd[1]: Successfully loaded SELinux policy in 53.537ms. Apr 13 19:23:25.059806 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.607ms. 
Apr 13 19:23:25.059840 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:23:25.059874 systemd[1]: Detected virtualization amazon. Apr 13 19:23:25.059913 systemd[1]: Detected architecture arm64. Apr 13 19:23:25.059945 systemd[1]: Detected first boot. Apr 13 19:23:25.059978 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:23:25.060126 zram_generator::config[1489]: No configuration found. Apr 13 19:23:25.060171 systemd[1]: Populated /etc with preset unit settings. Apr 13 19:23:25.060205 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 19:23:25.060238 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 13 19:23:25.060272 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 19:23:25.060305 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 19:23:25.060341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 19:23:25.060375 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 19:23:25.060408 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 19:23:25.060440 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 19:23:25.060475 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 19:23:25.060509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 19:23:25.060539 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 19:23:25.060571 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:23:25.060603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:23:25.060634 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 19:23:25.060676 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 19:23:25.060709 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 19:23:25.060746 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:23:25.060779 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 19:23:25.060812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:23:25.060842 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 13 19:23:25.060875 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 19:23:25.060906 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 19:23:25.060936 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 19:23:25.060968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:23:25.063630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:23:25.064368 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:23:25.067005 systemd[1]: Reached target swap.target - Swaps. 
Apr 13 19:23:25.067070 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 19:23:25.067107 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 19:23:25.067141 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:23:25.067175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:23:25.067205 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:23:25.067235 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 19:23:25.067266 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 19:23:25.067306 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 19:23:25.067337 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 19:23:25.067384 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 19:23:25.067425 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 19:23:25.067456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 19:23:25.067489 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 19:23:25.067520 systemd[1]: Reached target machines.target - Containers. Apr 13 19:23:25.067551 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 19:23:25.067586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:23:25.067619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:23:25.067649 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 19:23:25.067679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:23:25.067713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:23:25.067779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:23:25.067815 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 19:23:25.067849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:23:25.069832 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:23:25.070594 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 19:23:25.070631 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 19:23:25.070664 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 19:23:25.070694 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 19:23:25.070727 kernel: fuse: init (API version 7.39) Apr 13 19:23:25.070758 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:23:25.070791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:23:25.070821 kernel: loop: module loaded Apr 13 19:23:25.070850 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 19:23:25.070888 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Apr 13 19:23:25.070919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:23:25.070951 systemd[1]: verity-setup.service: Deactivated successfully. Apr 13 19:23:25.073006 systemd[1]: Stopped verity-setup.service. Apr 13 19:23:25.073064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 19:23:25.073096 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 19:23:25.073130 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 19:23:25.073160 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 19:23:25.073198 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 19:23:25.073228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:23:25.073261 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:23:25.073291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:23:25.073322 kernel: ACPI: bus type drm_connector registered Apr 13 19:23:25.073356 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 19:23:25.073386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:23:25.073416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:23:25.073446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:23:25.073476 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:23:25.073507 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:23:25.073537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:23:25.073658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:23:25.074418 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:23:25.074828 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:23:25.075181 systemd-journald[1575]: Collecting audit messages is disabled. Apr 13 19:23:25.075240 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:23:25.075273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:23:25.075309 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:23:25.075343 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 19:23:25.075390 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:23:25.075424 systemd-journald[1575]: Journal started Apr 13 19:23:25.075471 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2d8105dac1d1f2c6c9d17dff22f4d8) is 8.0M, max 75.3M, 67.3M free. Apr 13 19:23:24.412326 systemd[1]: Queued start job for default target multi-user.target. Apr 13 19:23:24.439601 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 13 19:23:24.440424 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 19:23:25.081023 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:23:25.109514 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:23:25.121341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:23:25.133722 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Apr 13 19:23:25.136353 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:23:25.136425 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:23:25.144052 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:23:25.153305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:23:25.168380 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:23:25.170973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:23:25.185214 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 19:23:25.202228 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 19:23:25.206188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:23:25.225450 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:23:25.230336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:23:25.243318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:23:25.253336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 19:23:25.262229 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:23:25.270443 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:23:25.275591 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:23:25.282111 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:23:25.320372 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2d8105dac1d1f2c6c9d17dff22f4d8 is 122.163ms for 899 entries. Apr 13 19:23:25.320372 systemd-journald[1575]: System Journal (/var/log/journal/ec2d8105dac1d1f2c6c9d17dff22f4d8) is 8.0M, max 195.6M, 187.6M free. Apr 13 19:23:25.471237 systemd-journald[1575]: Received client request to flush runtime journal. Apr 13 19:23:25.476103 kernel: loop0: detected capacity change from 0 to 209336 Apr 13 19:23:25.343718 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 19:23:25.347966 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:23:25.365918 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:23:25.391114 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:25.412540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:23:25.433402 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:23:25.473094 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:23:25.486368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:23:25.491169 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 19:23:25.514469 udevadm[1630]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 13 19:23:25.518949 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:23:25.523080 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 19:23:25.565389 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Apr 13 19:23:25.565421 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Apr 13 19:23:25.576281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:23:25.657032 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:23:25.687297 kernel: loop1: detected capacity change from 0 to 52536 Apr 13 19:23:25.796425 kernel: loop2: detected capacity change from 0 to 114432 Apr 13 19:23:25.849140 kernel: loop3: detected capacity change from 0 to 114328 Apr 13 19:23:25.903952 kernel: loop4: detected capacity change from 0 to 209336 Apr 13 19:23:25.934162 kernel: loop5: detected capacity change from 0 to 52536 Apr 13 19:23:25.959079 kernel: loop6: detected capacity change from 0 to 114432 Apr 13 19:23:25.984015 kernel: loop7: detected capacity change from 0 to 114328 Apr 13 19:23:25.989236 ldconfig[1613]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 19:23:25.996702 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:23:26.002177 (sd-merge)[1645]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 13 19:23:26.003433 (sd-merge)[1645]: Merged extensions into '/usr'. Apr 13 19:23:26.011466 systemd[1]: Reloading requested from client PID 1618 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:23:26.011499 systemd[1]: Reloading... Apr 13 19:23:26.187034 zram_generator::config[1671]: No configuration found. Apr 13 19:23:26.479625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:26.590825 systemd[1]: Reloading finished in 578 ms. Apr 13 19:23:26.628558 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:23:26.631847 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:23:26.649332 systemd[1]: Starting ensure-sysext.service... Apr 13 19:23:26.660414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:23:26.666357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:23:26.696186 systemd[1]: Reloading requested from client PID 1723 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:23:26.696220 systemd[1]: Reloading... Apr 13 19:23:26.715260 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 19:23:26.715965 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:23:26.717827 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:23:26.718426 systemd-tmpfiles[1724]: ACLs are not supported, ignoring. Apr 13 19:23:26.718561 systemd-tmpfiles[1724]: ACLs are not supported, ignoring. 
Apr 13 19:23:26.726299 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:23:26.726327 systemd-tmpfiles[1724]: Skipping /boot Apr 13 19:23:26.765602 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:23:26.765633 systemd-tmpfiles[1724]: Skipping /boot Apr 13 19:23:26.801230 systemd-udevd[1725]: Using default interface naming scheme 'v255'. Apr 13 19:23:26.873030 zram_generator::config[1752]: No configuration found. Apr 13 19:23:27.090567 (udev-worker)[1761]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:23:27.264012 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1772) Apr 13 19:23:27.288274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:23:27.451190 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 13 19:23:27.451314 systemd[1]: Reloading finished in 754 ms. Apr 13 19:23:27.491525 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:23:27.497080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:23:27.618934 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:23:27.630938 systemd[1]: Finished ensure-sysext.service. Apr 13 19:23:27.649663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 13 19:23:27.661282 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:27.676121 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 19:23:27.679156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:23:27.688507 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:23:27.696725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:23:27.712540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:23:27.720659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:23:27.728039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:23:27.733927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:23:27.738314 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 19:23:27.746358 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 19:23:27.758334 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:23:27.780017 lvm[1923]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:23:27.767608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:23:27.770145 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 19:23:27.786389 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 13 19:23:27.796351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:27.844773 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:23:27.848114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:23:27.857939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:23:27.859442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:23:27.873174 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 19:23:27.876796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:23:27.878014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:23:27.882648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:23:27.912674 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:23:27.917263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:23:27.928288 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:23:27.936549 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:23:27.937394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:23:27.940756 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 19:23:27.945599 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:23:27.947414 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:23:27.977804 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 19:23:27.992727 lvm[1953]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:23:28.001158 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 19:23:28.010145 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:23:28.027884 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 19:23:28.033840 augenrules[1961]: No rules Apr 13 19:23:28.042872 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:28.062382 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 19:23:28.083429 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 19:23:28.092633 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 19:23:28.157910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:28.223850 systemd-networkd[1936]: lo: Link UP Apr 13 19:23:28.223876 systemd-networkd[1936]: lo: Gained carrier Apr 13 19:23:28.226783 systemd-networkd[1936]: Enumeration completed Apr 13 19:23:28.227028 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:23:28.229515 systemd-networkd[1936]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 19:23:28.229536 systemd-networkd[1936]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:23:28.235165 systemd-networkd[1936]: eth0: Link UP Apr 13 19:23:28.235604 systemd-networkd[1936]: eth0: Gained carrier Apr 13 19:23:28.235641 systemd-networkd[1936]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:23:28.237434 systemd-resolved[1937]: Positive Trust Anchors: Apr 13 19:23:28.237737 systemd-resolved[1937]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:23:28.237803 systemd-resolved[1937]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:23:28.244403 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 19:23:28.254125 systemd-networkd[1936]: eth0: DHCPv4 address 172.31.27.52/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 13 19:23:28.256947 systemd-resolved[1937]: Defaulting to hostname 'linux'. Apr 13 19:23:28.260598 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:23:28.263315 systemd[1]: Reached target network.target - Network. Apr 13 19:23:28.265400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:23:28.268083 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:23:28.270630 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 19:23:28.273472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:23:28.276666 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:23:28.279417 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:23:28.282469 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:23:28.285446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:23:28.285493 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:23:28.287544 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:23:28.291179 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:23:28.296616 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:23:28.311469 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:23:28.314979 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:23:28.317659 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:23:28.320013 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:23:28.322233 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 13 19:23:28.322296 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:23:28.330153 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:23:28.336569 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:23:28.345275 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:23:28.355374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:23:28.362333 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:23:28.366190 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:23:28.371376 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:23:28.377170 systemd[1]: Started ntpd.service - Network Time Service. Apr 13 19:23:28.384310 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:23:28.393157 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 13 19:23:28.402360 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:23:28.412226 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:23:28.423346 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:23:28.429905 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:23:28.430795 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:23:28.437242 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:23:28.445124 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:23:28.454707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:23:28.457065 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 19:23:28.501078 extend-filesystems[1988]: Found loop4 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found loop5 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found loop6 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found loop7 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p1 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p2 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p3 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found usr Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p4 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p6 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p7 Apr 13 19:23:28.501078 extend-filesystems[1988]: Found nvme0n1p9 Apr 13 19:23:28.501078 extend-filesystems[1988]: Checking size of /dev/nvme0n1p9 Apr 13 19:23:28.543846 jq[1987]: false Apr 13 19:23:28.552503 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:23:28.553437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:23:28.622791 dbus-daemon[1986]: [system] SELinux support is enabled Apr 13 19:23:28.623469 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 13 19:23:28.633936 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:23:28.634184 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:23:28.640855 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:23:28.640915 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:23:28.657014 jq[1999]: true Apr 13 19:23:28.664475 dbus-daemon[1986]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1936 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 13 19:23:28.680546 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 13 19:23:28.687427 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: ---------------------------------------------------- Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: corporation. Support and training for ntp-4 are Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: available at https://www.nwtime.org/support Apr 13 19:23:28.689517 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: ---------------------------------------------------- Apr 13 19:23:28.687493 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:23:28.687514 ntpd[1990]: ---------------------------------------------------- Apr 13 19:23:28.687533 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:23:28.687552 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:23:28.687571 ntpd[1990]: corporation. Support and training for ntp-4 are Apr 13 19:23:28.687591 ntpd[1990]: available at https://www.nwtime.org/support Apr 13 19:23:28.687609 ntpd[1990]: ---------------------------------------------------- Apr 13 19:23:28.695313 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Apr 13 19:23:28.704055 extend-filesystems[1988]: Resized partition /dev/nvme0n1p9 Apr 13 19:23:28.716731 ntpd[1990]: proto: precision = 0.096 usec (-23) Apr 13 19:23:28.716876 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: proto: precision = 0.096 usec (-23) Apr 13 19:23:28.720534 ntpd[1990]: basedate set to 2026-04-01 Apr 13 19:23:28.720587 ntpd[1990]: gps base set to 2026-04-05 (week 2413) Apr 13 19:23:28.720800 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: basedate set to 2026-04-01 Apr 13 19:23:28.720800 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: gps base set to 2026-04-05 (week 2413) Apr 13 19:23:28.729913 extend-filesystems[2031]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:23:28.751939 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 13 19:23:28.752173 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:23:28.752173 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:23:28.740520 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:23:28.733170 (ntainerd)[2017]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:23:28.740609 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:23:28.735675 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:23:28.754697 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:23:28.754697 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Listen normally on 3 eth0 172.31.27.52:123 Apr 13 19:23:28.754697 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Listen normally on 4 lo [::1]:123 Apr 13 19:23:28.754336 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:23:28.737089 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 13 19:23:28.756387 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: bind(21) AF_INET6 fe80::469:96ff:fec8:b7eb%2#123 flags 0x11 failed: Cannot assign requested address Apr 13 19:23:28.756387 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: unable to create socket on eth0 (5) for fe80::469:96ff:fec8:b7eb%2#123 Apr 13 19:23:28.756387 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: failed to init interface for address fe80::469:96ff:fec8:b7eb%2 Apr 13 19:23:28.756387 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Apr 13 19:23:28.754416 ntpd[1990]: Listen normally on 3 eth0 172.31.27.52:123 Apr 13 19:23:28.754484 ntpd[1990]: Listen normally on 4 lo [::1]:123 Apr 13 19:23:28.754759 ntpd[1990]: bind(21) AF_INET6 fe80::469:96ff:fec8:b7eb%2#123 flags 0x11 failed: Cannot assign requested address Apr 13 19:23:28.754802 ntpd[1990]: unable to create socket on eth0 (5) for fe80::469:96ff:fec8:b7eb%2#123 Apr 13 19:23:28.754831 ntpd[1990]: failed to init interface for address fe80::469:96ff:fec8:b7eb%2 Apr 13 19:23:28.754889 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Apr 13 19:23:28.778512 tar[2013]: linux-arm64/LICENSE Apr 13 19:23:28.778512 tar[2013]: linux-arm64/helm Apr 13 19:23:28.787778 coreos-metadata[1985]: Apr 13 19:23:28.786 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:28.800013 jq[2027]: true Apr 13 19:23:28.805835 coreos-metadata[1985]: Apr 13 19:23:28.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 13 19:23:28.811078 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:28.811148 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:28.811426 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:28.811426 ntpd[1990]: 13 Apr 19:23:28 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:28.814456 coreos-metadata[1985]: Apr 13 19:23:28.814 INFO Fetch successful Apr 13 19:23:28.814456 coreos-metadata[1985]: Apr 13 19:23:28.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 13 19:23:28.820426 coreos-metadata[1985]: Apr 13 19:23:28.820 INFO Fetch successful Apr 13 19:23:28.820426 coreos-metadata[1985]: Apr 13 19:23:28.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 13 19:23:28.826325 coreos-metadata[1985]: Apr 13 19:23:28.822 INFO Fetch successful Apr 13 19:23:28.826325 coreos-metadata[1985]: Apr 13 19:23:28.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 13 19:23:28.826325 coreos-metadata[1985]: Apr 13 19:23:28.826 INFO Fetch successful Apr 13 19:23:28.826325 coreos-metadata[1985]: Apr 13 19:23:28.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 13 19:23:28.832956 coreos-metadata[1985]: Apr 13 19:23:28.832 INFO Fetch failed with 404: resource not found Apr 13 19:23:28.832956 coreos-metadata[1985]: Apr 13 19:23:28.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 13 19:23:28.838326 coreos-metadata[1985]: Apr 13 19:23:28.838 INFO Fetch successful Apr 13 19:23:28.838326 coreos-metadata[1985]: Apr 13 19:23:28.838 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 13 19:23:28.852814 coreos-metadata[1985]: Apr 13 19:23:28.851 INFO Fetch successful Apr 13 19:23:28.852814 
coreos-metadata[1985]: Apr 13 19:23:28.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 13 19:23:28.860908 coreos-metadata[1985]: Apr 13 19:23:28.855 INFO Fetch successful Apr 13 19:23:28.860908 coreos-metadata[1985]: Apr 13 19:23:28.855 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 13 19:23:28.862571 coreos-metadata[1985]: Apr 13 19:23:28.862 INFO Fetch successful Apr 13 19:23:28.862571 coreos-metadata[1985]: Apr 13 19:23:28.862 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 13 19:23:28.872197 coreos-metadata[1985]: Apr 13 19:23:28.865 INFO Fetch successful Apr 13 19:23:28.887816 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 13 19:23:28.894699 update_engine[1998]: I20260413 19:23:28.886951 1998 main.cc:92] Flatcar Update Engine starting Apr 13 19:23:28.899922 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 13 19:23:28.902451 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:23:28.939256 update_engine[1998]: I20260413 19:23:28.902873 1998 update_check_scheduler.cc:74] Next update check in 3m53s Apr 13 19:23:28.909516 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 19:23:28.945124 extend-filesystems[2031]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 13 19:23:28.945124 extend-filesystems[2031]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 13 19:23:28.945124 extend-filesystems[2031]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 13 19:23:28.964726 extend-filesystems[1988]: Resized filesystem in /dev/nvme0n1p9 Apr 13 19:23:28.965290 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 19:23:28.967849 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 19:23:29.044483 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 19:23:29.048830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:23:29.060059 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1787) Apr 13 19:23:29.099373 bash[2068]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:29.108071 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:23:29.128697 systemd[1]: Starting sshkeys.service... Apr 13 19:23:29.179070 systemd-logind[1997]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:23:29.179152 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 13 19:23:29.181294 systemd-logind[1997]: New seat seat0. Apr 13 19:23:29.187412 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:23:29.229646 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:23:29.264138 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:23:29.299124 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 19:23:29.378948 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 19:23:29.379542 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Apr 13 19:23:29.380127 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 19:23:29.431333 systemd[1]: Starting polkit.service - Authorization Manager... Apr 13 19:23:29.506208 polkitd[2121]: Started polkitd version 121 Apr 13 19:23:29.546688 polkitd[2121]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 19:23:29.553383 polkitd[2121]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 19:23:29.554331 polkitd[2121]: Finished loading, compiling and executing 2 rules Apr 13 19:23:29.556919 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 19:23:29.557236 systemd[1]: Started polkit.service - Authorization Manager. Apr 13 19:23:29.561612 polkitd[2121]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 19:23:29.591572 systemd-networkd[1936]: eth0: Gained IPv6LL Apr 13 19:23:29.608734 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:23:29.612965 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:23:29.622352 systemd-hostnamed[2030]: Hostname set to (transient) Apr 13 19:23:29.624061 systemd-resolved[1937]: System hostname changed to 'ip-172-31-27-52'. Apr 13 19:23:29.671515 coreos-metadata[2103]: Apr 13 19:23:29.648 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:29.671515 coreos-metadata[2103]: Apr 13 19:23:29.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 13 19:23:29.671515 coreos-metadata[2103]: Apr 13 19:23:29.652 INFO Fetch successful Apr 13 19:23:29.671515 coreos-metadata[2103]: Apr 13 19:23:29.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 13 19:23:29.671515 coreos-metadata[2103]: Apr 13 19:23:29.656 INFO Fetch successful Apr 13 19:23:29.660626 unknown[2103]: wrote ssh authorized keys file for user: core Apr 13 19:23:29.668926 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 13 19:23:29.671345 locksmithd[2048]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:23:29.677450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:29.689583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:23:29.787279 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:23:29.804018 containerd[2017]: time="2026-04-13T19:23:29.802726297Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:23:29.816325 amazon-ssm-agent[2161]: Initializing new seelog logger Apr 13 19:23:29.820015 amazon-ssm-agent[2161]: New Seelog Logger Creation Complete Apr 13 19:23:29.820015 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.820015 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.820015 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 processing appconfig overrides Apr 13 19:23:29.821217 amazon-ssm-agent[2161]: 2026-04-13 19:23:29 INFO Proxy environment variables: Apr 13 19:23:29.821331 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
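The ntpd bind failures near the top of this section ("Cannot assign requested address" for fe80::469:96ff:fec8:b7eb%2#123) are what binding a link-local IPv6 address looks like before the address is actually usable on the interface; once systemd-networkd reports "eth0: Gained IPv6LL" above, ntpd can open the socket (the later "Listen normally on 6 eth0" line). A small illustration of the same socket call, assuming a Linux host with an eth0 interface; this is not ntpd's code, only the bind it attempts:

import errno
import socket

addr = "fe80::469:96ff:fec8:b7eb"         # link-local address from the log
scope_id = socket.if_nametoindex("eth0")  # the "%2" in the log is this scope id

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # Port 123 needs root, like ntpd; the 4-tuple is (host, port, flowinfo, scope_id).
    sock.bind((addr, 123, 0, scope_id))
except OSError as exc:
    if exc.errno == errno.EADDRNOTAVAIL:
        # "Cannot assign requested address": the address is tentative or not yet assigned.
        print("address not ready yet:", exc)
    else:
        raise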
Apr 13 19:23:29.821429 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.821650 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 processing appconfig overrides Apr 13 19:23:29.825706 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.828346 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.829395 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 processing appconfig overrides Apr 13 19:23:29.840353 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.840353 amazon-ssm-agent[2161]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:23:29.840353 amazon-ssm-agent[2161]: 2026/04/13 19:23:29 processing appconfig overrides Apr 13 19:23:29.848421 update-ssh-keys[2171]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:29.851044 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:23:29.861710 systemd[1]: Finished sshkeys.service. Apr 13 19:23:29.925015 amazon-ssm-agent[2161]: 2026-04-13 19:23:29 INFO https_proxy: Apr 13 19:23:29.954111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:23:30.022531 amazon-ssm-agent[2161]: 2026-04-13 19:23:29 INFO http_proxy: Apr 13 19:23:30.047475 containerd[2017]: time="2026-04-13T19:23:30.045974843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.049107 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:23:30.057389 containerd[2017]: time="2026-04-13T19:23:30.057299195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.057389 containerd[2017]: time="2026-04-13T19:23:30.057377111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:23:30.057577 containerd[2017]: time="2026-04-13T19:23:30.057414791Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:23:30.057770 containerd[2017]: time="2026-04-13T19:23:30.057720767Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 19:23:30.057845 containerd[2017]: time="2026-04-13T19:23:30.057769511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.057944 containerd[2017]: time="2026-04-13T19:23:30.057898391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.058093 containerd[2017]: time="2026-04-13T19:23:30.057938975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.062527 containerd[2017]: time="2026-04-13T19:23:30.062443451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.062527 containerd[2017]: time="2026-04-13T19:23:30.062510939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.062683 containerd[2017]: time="2026-04-13T19:23:30.062548691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.062683 containerd[2017]: time="2026-04-13T19:23:30.062574503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.062801 containerd[2017]: time="2026-04-13T19:23:30.062782079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.069094 containerd[2017]: time="2026-04-13T19:23:30.063760475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:30.069094 containerd[2017]: time="2026-04-13T19:23:30.067952339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:30.069094 containerd[2017]: time="2026-04-13T19:23:30.068055923Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:23:30.069094 containerd[2017]: time="2026-04-13T19:23:30.068282975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 19:23:30.069094 containerd[2017]: time="2026-04-13T19:23:30.068384039Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:23:30.064004 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:23:30.080519 systemd[1]: Started sshd@0-172.31.27.52:22-4.175.71.9:49182.service - OpenSSH per-connection server daemon (4.175.71.9:49182). Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088078499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088207883Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088244543Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088280195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088312751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088580627Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.088974647Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089227955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089263895Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089294435Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089333759Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089365199Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089395763Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091016 containerd[2017]: time="2026-04-13T19:23:30.089427095Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089459711Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089489747Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089518595Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089549147Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089590847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089622887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089652263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089684639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089713943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089743979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089772659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089802347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089834015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.091689 containerd[2017]: time="2026-04-13T19:23:30.089867171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.092440 containerd[2017]: time="2026-04-13T19:23:30.089898419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.092440 containerd[2017]: time="2026-04-13T19:23:30.089933999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.092440 containerd[2017]: time="2026-04-13T19:23:30.089966507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.097217639Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.097295783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.097328999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.097357631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.097616699Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099504227Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099559715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099597647Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099622859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099654791Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099678959Z" level=info msg="NRI interface is disabled by configuration." Apr 13 19:23:30.100764 containerd[2017]: time="2026-04-13T19:23:30.099704879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:30.110082 containerd[2017]: time="2026-04-13T19:23:30.106792235Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:23:30.110082 containerd[2017]: time="2026-04-13T19:23:30.106931759Z" level=info msg="Connect containerd service" Apr 13 19:23:30.110082 containerd[2017]: time="2026-04-13T19:23:30.107028839Z" level=info msg="using legacy CRI server" Apr 13 19:23:30.110082 containerd[2017]: time="2026-04-13T19:23:30.107050475Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:23:30.110082 containerd[2017]: time="2026-04-13T19:23:30.107192423Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:23:30.120797 containerd[2017]: time="2026-04-13T19:23:30.120713315Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:23:30.121604 
containerd[2017]: time="2026-04-13T19:23:30.121071107Z" level=info msg="Start subscribing containerd event" Apr 13 19:23:30.121604 containerd[2017]: time="2026-04-13T19:23:30.121164023Z" level=info msg="Start recovering state" Apr 13 19:23:30.121604 containerd[2017]: time="2026-04-13T19:23:30.121294199Z" level=info msg="Start event monitor" Apr 13 19:23:30.121604 containerd[2017]: time="2026-04-13T19:23:30.121319567Z" level=info msg="Start snapshots syncer" Apr 13 19:23:30.121604 containerd[2017]: time="2026-04-13T19:23:30.121341443Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:23:30.121604 containerd[2017]: time="2026-04-13T19:23:30.121360367Z" level=info msg="Start streaming server" Apr 13 19:23:30.121842 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:23:30.122785 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:23:30.126338 amazon-ssm-agent[2161]: 2026-04-13 19:23:29 INFO no_proxy: Apr 13 19:23:30.135788 containerd[2017]: time="2026-04-13T19:23:30.134865335Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:23:30.135788 containerd[2017]: time="2026-04-13T19:23:30.135016115Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 19:23:30.135557 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:23:30.170905 containerd[2017]: time="2026-04-13T19:23:30.166169771Z" level=info msg="containerd successfully booted in 0.371691s" Apr 13 19:23:30.166296 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:23:30.209043 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:23:30.225111 amazon-ssm-agent[2161]: 2026-04-13 19:23:29 INFO Checking if agent identity type OnPrem can be assumed Apr 13 19:23:30.225262 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:23:30.239297 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 19:23:30.242219 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:23:30.328179 amazon-ssm-agent[2161]: 2026-04-13 19:23:29 INFO Checking if agent identity type EC2 can be assumed Apr 13 19:23:30.426723 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO Agent will take identity from EC2 Apr 13 19:23:30.526020 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.625381 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.725005 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:23:30.784678 tar[2013]: linux-arm64/README.md Apr 13 19:23:30.803760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:23:30.824154 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] Starting Core Agent Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [Registrar] Starting registrar module Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [EC2Identity] EC2 registration was successful. Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [CredentialRefresher] credentialRefresher has started Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [CredentialRefresher] Starting credentials refresher loop Apr 13 19:23:30.838231 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 13 19:23:30.923394 amazon-ssm-agent[2161]: 2026-04-13 19:23:30 INFO [CredentialRefresher] Next credential rotation will be in 31.8749906311 minutes Apr 13 19:23:31.155028 sshd[2215]: Accepted publickey for core from 4.175.71.9 port 49182 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:31.158849 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:31.179433 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:23:31.188492 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:23:31.197105 systemd-logind[1997]: New session 1 of user core. Apr 13 19:23:31.224318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:23:31.238690 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:23:31.264811 (systemd)[2233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:23:31.491722 systemd[2233]: Queued start job for default target default.target. Apr 13 19:23:31.504201 systemd[2233]: Created slice app.slice - User Application Slice. Apr 13 19:23:31.504269 systemd[2233]: Reached target paths.target - Paths. Apr 13 19:23:31.504302 systemd[2233]: Reached target timers.target - Timers. Apr 13 19:23:31.509214 systemd[2233]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:23:31.536497 systemd[2233]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:23:31.536743 systemd[2233]: Reached target sockets.target - Sockets. Apr 13 19:23:31.536776 systemd[2233]: Reached target basic.target - Basic System. Apr 13 19:23:31.536880 systemd[2233]: Reached target default.target - Main User Target. Apr 13 19:23:31.536948 systemd[2233]: Startup finished in 259ms. Apr 13 19:23:31.537239 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:23:31.549249 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 13 19:23:31.696372 ntpd[1990]: Listen normally on 6 eth0 [fe80::469:96ff:fec8:b7eb%2]:123 Apr 13 19:23:31.697226 ntpd[1990]: 13 Apr 19:23:31 ntpd[1990]: Listen normally on 6 eth0 [fe80::469:96ff:fec8:b7eb%2]:123 Apr 13 19:23:31.867860 amazon-ssm-agent[2161]: 2026-04-13 19:23:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 13 19:23:31.969644 amazon-ssm-agent[2161]: 2026-04-13 19:23:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2243) started Apr 13 19:23:32.069451 amazon-ssm-agent[2161]: 2026-04-13 19:23:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 13 19:23:32.260556 systemd[1]: Started sshd@1-172.31.27.52:22-4.175.71.9:49198.service - OpenSSH per-connection server daemon (4.175.71.9:49198). Apr 13 19:23:33.257364 sshd[2255]: Accepted publickey for core from 4.175.71.9 port 49198 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:33.260222 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:33.269468 systemd-logind[1997]: New session 2 of user core. Apr 13 19:23:33.279248 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:23:33.650326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:33.653966 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:23:33.659047 systemd[1]: Startup finished in 1.191s (kernel) + 10.595s (initrd) + 10.296s (userspace) = 22.084s. Apr 13 19:23:33.665616 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:33.945310 sshd[2255]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:33.952862 systemd[1]: sshd@1-172.31.27.52:22-4.175.71.9:49198.service: Deactivated successfully. Apr 13 19:23:33.956493 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:23:33.957917 systemd-logind[1997]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:23:33.961832 systemd-logind[1997]: Removed session 2. Apr 13 19:23:34.120460 systemd[1]: Started sshd@2-172.31.27.52:22-4.175.71.9:49204.service - OpenSSH per-connection server daemon (4.175.71.9:49204). Apr 13 19:23:35.123219 sshd[2272]: Accepted publickey for core from 4.175.71.9 port 49204 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:35.125850 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:35.138579 systemd-logind[1997]: New session 3 of user core. Apr 13 19:23:35.145294 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:23:35.488879 kubelet[2263]: E0413 19:23:35.488787 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:35.492348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:35.492651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:35.493526 systemd[1]: kubelet.service: Consumed 1.367s CPU time. Apr 13 19:23:36.183414 systemd-resolved[1937]: Clock change detected. 
Flushing caches. Apr 13 19:23:36.284680 sshd[2272]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:36.290149 systemd[1]: sshd@2-172.31.27.52:22-4.175.71.9:49204.service: Deactivated successfully. Apr 13 19:23:36.292986 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 19:23:36.297932 systemd-logind[1997]: Session 3 logged out. Waiting for processes to exit. Apr 13 19:23:36.300013 systemd-logind[1997]: Removed session 3. Apr 13 19:23:36.464968 systemd[1]: Started sshd@3-172.31.27.52:22-4.175.71.9:36528.service - OpenSSH per-connection server daemon (4.175.71.9:36528). Apr 13 19:23:37.473732 sshd[2285]: Accepted publickey for core from 4.175.71.9 port 36528 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:37.476308 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:37.485581 systemd-logind[1997]: New session 4 of user core. Apr 13 19:23:37.496735 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 19:23:38.166838 sshd[2285]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:38.173027 systemd[1]: sshd@3-172.31.27.52:22-4.175.71.9:36528.service: Deactivated successfully. Apr 13 19:23:38.177065 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:23:38.180281 systemd-logind[1997]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:23:38.182638 systemd-logind[1997]: Removed session 4. Apr 13 19:23:38.337913 systemd[1]: Started sshd@4-172.31.27.52:22-4.175.71.9:36530.service - OpenSSH per-connection server daemon (4.175.71.9:36530). Apr 13 19:23:39.349495 sshd[2292]: Accepted publickey for core from 4.175.71.9 port 36530 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:39.351213 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:39.358350 systemd-logind[1997]: New session 5 of user core. Apr 13 19:23:39.369721 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 19:23:39.895884 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:23:39.897198 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:39.916115 sudo[2295]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:40.079896 sshd[2292]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:40.087257 systemd[1]: sshd@4-172.31.27.52:22-4.175.71.9:36530.service: Deactivated successfully. Apr 13 19:23:40.090646 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:23:40.093757 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:23:40.095572 systemd-logind[1997]: Removed session 5. Apr 13 19:23:40.257674 systemd[1]: Started sshd@5-172.31.27.52:22-4.175.71.9:36538.service - OpenSSH per-connection server daemon (4.175.71.9:36538). Apr 13 19:23:41.273009 sshd[2300]: Accepted publickey for core from 4.175.71.9 port 36538 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:41.274789 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:41.282930 systemd-logind[1997]: New session 6 of user core. Apr 13 19:23:41.292712 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 19:23:41.803656 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 19:23:41.804329 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:41.810752 sudo[2304]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:41.820702 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 19:23:41.821358 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:41.844974 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:41.850269 auditctl[2307]: No rules Apr 13 19:23:41.852429 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 19:23:41.854538 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:41.866083 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:23:41.908786 augenrules[2325]: No rules Apr 13 19:23:41.910355 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:23:41.914037 sudo[2303]: pam_unix(sudo:session): session closed for user root Apr 13 19:23:42.076364 sshd[2300]: pam_unix(sshd:session): session closed for user core Apr 13 19:23:42.084369 systemd[1]: sshd@5-172.31.27.52:22-4.175.71.9:36538.service: Deactivated successfully. Apr 13 19:23:42.087951 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 19:23:42.090539 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. Apr 13 19:23:42.092233 systemd-logind[1997]: Removed session 6. Apr 13 19:23:42.261934 systemd[1]: Started sshd@6-172.31.27.52:22-4.175.71.9:36554.service - OpenSSH per-connection server daemon (4.175.71.9:36554). Apr 13 19:23:43.258901 sshd[2333]: Accepted publickey for core from 4.175.71.9 port 36554 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:23:43.261513 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:23:43.269987 systemd-logind[1997]: New session 7 of user core. Apr 13 19:23:43.276706 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:23:43.791371 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:23:43.792108 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:23:44.277943 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 19:23:44.291955 (dockerd)[2351]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:23:44.697402 dockerd[2351]: time="2026-04-13T19:23:44.697302670Z" level=info msg="Starting up" Apr 13 19:23:44.822850 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3613883470-merged.mount: Deactivated successfully. Apr 13 19:23:44.868980 systemd[1]: var-lib-docker-metacopy\x2dcheck4032437443-merged.mount: Deactivated successfully. Apr 13 19:23:44.888217 dockerd[2351]: time="2026-04-13T19:23:44.887010647Z" level=info msg="Loading containers: start." Apr 13 19:23:45.057501 kernel: Initializing XFRM netlink socket Apr 13 19:23:45.093798 (udev-worker)[2372]: Network interface NamePolicy= disabled on kernel command line. 
Apr 13 19:23:45.179472 systemd-networkd[1936]: docker0: Link UP Apr 13 19:23:45.212529 dockerd[2351]: time="2026-04-13T19:23:45.211852856Z" level=info msg="Loading containers: done." Apr 13 19:23:45.245906 dockerd[2351]: time="2026-04-13T19:23:45.245841861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:23:45.246274 dockerd[2351]: time="2026-04-13T19:23:45.246002673Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:23:45.246274 dockerd[2351]: time="2026-04-13T19:23:45.246190797Z" level=info msg="Daemon has completed initialization" Apr 13 19:23:45.312502 dockerd[2351]: time="2026-04-13T19:23:45.312238593Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:23:45.313926 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 19:23:46.132270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:23:46.138813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:23:46.600788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:46.609607 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:46.682567 kubelet[2496]: E0413 19:23:46.682432 2496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:46.691945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:46.692263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:47.283479 containerd[2017]: time="2026-04-13T19:23:47.283349675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 19:23:48.033855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239182151.mount: Deactivated successfully. 
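Once dockerd logs "API listen on /run/docker.sock" above, the Engine API is reachable over that Unix socket. Nothing in this boot sequence talks to it, but purely as an illustration of the endpoint, a short sketch (assuming the default socket path and permission to read it) that asks the daemon for its version via the unversioned /version route:

import http.client
import json
import socket

class DockerUnixConnection(http.client.HTTPConnection):
    # HTTPConnection variant that dials the Docker daemon's Unix socket instead of TCP.
    def __init__(self, socket_path="/run/docker.sock"):
        super().__init__("localhost")   # host only feeds the Host: header
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = DockerUnixConnection()
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info["Version"], info["ApiVersion"])

Against the daemon started in this log, the reported "Version" would be the 26.1.0 shown at startup above.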
Apr 13 19:23:49.593496 containerd[2017]: time="2026-04-13T19:23:49.593406134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:49.595886 containerd[2017]: time="2026-04-13T19:23:49.595552802Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283683" Apr 13 19:23:49.598342 containerd[2017]: time="2026-04-13T19:23:49.598251398Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:49.605327 containerd[2017]: time="2026-04-13T19:23:49.604521878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:49.606839 containerd[2017]: time="2026-04-13T19:23:49.606778586Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 2.323357043s" Apr 13 19:23:49.606955 containerd[2017]: time="2026-04-13T19:23:49.606840218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\"" Apr 13 19:23:49.608085 containerd[2017]: time="2026-04-13T19:23:49.608024438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 19:23:51.466190 containerd[2017]: time="2026-04-13T19:23:51.466134867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:51.472499 containerd[2017]: time="2026-04-13T19:23:51.470193519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551902" Apr 13 19:23:51.475984 containerd[2017]: time="2026-04-13T19:23:51.475921263Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:51.483705 containerd[2017]: time="2026-04-13T19:23:51.483648844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:51.486034 containerd[2017]: time="2026-04-13T19:23:51.485967388Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 1.877875426s" Apr 13 19:23:51.486239 containerd[2017]: time="2026-04-13T19:23:51.486030112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\"" Apr 13 
19:23:51.487054 containerd[2017]: time="2026-04-13T19:23:51.486949948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 19:23:53.052352 containerd[2017]: time="2026-04-13T19:23:53.052269831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.056181 containerd[2017]: time="2026-04-13T19:23:53.055780359Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301233" Apr 13 19:23:53.059225 containerd[2017]: time="2026-04-13T19:23:53.058411059Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.064765 containerd[2017]: time="2026-04-13T19:23:53.064713123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:53.067072 containerd[2017]: time="2026-04-13T19:23:53.067005675Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 1.579996051s" Apr 13 19:23:53.067072 containerd[2017]: time="2026-04-13T19:23:53.067067883Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\"" Apr 13 19:23:53.067803 containerd[2017]: time="2026-04-13T19:23:53.067742607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 19:23:54.385607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058297364.mount: Deactivated successfully. 
Apr 13 19:23:55.032731 containerd[2017]: time="2026-04-13T19:23:55.032640953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.035831 containerd[2017]: time="2026-04-13T19:23:55.035756525Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148953" Apr 13 19:23:55.040835 containerd[2017]: time="2026-04-13T19:23:55.040689197Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.048494 containerd[2017]: time="2026-04-13T19:23:55.047231477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:55.050969 containerd[2017]: time="2026-04-13T19:23:55.050514377Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 1.982711614s" Apr 13 19:23:55.050969 containerd[2017]: time="2026-04-13T19:23:55.050577809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\"" Apr 13 19:23:55.051813 containerd[2017]: time="2026-04-13T19:23:55.051437453Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 19:23:55.686174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858911750.mount: Deactivated successfully. Apr 13 19:23:56.883129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:23:56.894858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 19:23:56.972630 containerd[2017]: time="2026-04-13T19:23:56.970526999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.975666 containerd[2017]: time="2026-04-13T19:23:56.975592835Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 13 19:23:56.979275 containerd[2017]: time="2026-04-13T19:23:56.979208483Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.988487 containerd[2017]: time="2026-04-13T19:23:56.988169651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:56.992143 containerd[2017]: time="2026-04-13T19:23:56.991945835Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.940424754s" Apr 13 19:23:56.992143 containerd[2017]: time="2026-04-13T19:23:56.992008391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 13 19:23:56.993000 containerd[2017]: time="2026-04-13T19:23:56.992834615Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 19:23:57.262644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:23:57.276238 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:23:57.346479 kubelet[2638]: E0413 19:23:57.346388 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:23:57.352849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:23:57.353307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:23:57.550281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837114170.mount: Deactivated successfully. 
Apr 13 19:23:57.562525 containerd[2017]: time="2026-04-13T19:23:57.561755926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:57.565076 containerd[2017]: time="2026-04-13T19:23:57.564668818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 13 19:23:57.567488 containerd[2017]: time="2026-04-13T19:23:57.567220810Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:57.574372 containerd[2017]: time="2026-04-13T19:23:57.572545702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:23:57.574372 containerd[2017]: time="2026-04-13T19:23:57.574169110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 581.275047ms" Apr 13 19:23:57.574372 containerd[2017]: time="2026-04-13T19:23:57.574219942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 13 19:23:57.576012 containerd[2017]: time="2026-04-13T19:23:57.575971546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 19:23:58.179499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464216009.mount: Deactivated successfully. Apr 13 19:24:00.112961 containerd[2017]: time="2026-04-13T19:24:00.110655742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:00.115347 containerd[2017]: time="2026-04-13T19:24:00.115290610Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Apr 13 19:24:00.116068 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 13 19:24:00.119754 containerd[2017]: time="2026-04-13T19:24:00.119620210Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:00.130378 containerd[2017]: time="2026-04-13T19:24:00.130301938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:00.133583 containerd[2017]: time="2026-04-13T19:24:00.133015402Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.556795936s" Apr 13 19:24:00.133583 containerd[2017]: time="2026-04-13T19:24:00.133078018Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 13 19:24:07.382339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:24:07.391895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:07.728705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:07.749113 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:07.823485 kubelet[2743]: E0413 19:24:07.821066 2743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:07.826202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:07.826654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:08.478749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:08.488980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:08.548244 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-7.scope)... Apr 13 19:24:08.548273 systemd[1]: Reloading... Apr 13 19:24:08.788507 zram_generator::config[2801]: No configuration found. Apr 13 19:24:09.040008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:09.215168 systemd[1]: Reloading finished in 666 ms. Apr 13 19:24:09.308551 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:24:09.308749 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:24:09.309571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:09.318157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:09.663727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:24:09.665552 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:09.733497 kubelet[2861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:09.734495 kubelet[2861]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:09.734495 kubelet[2861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:09.734495 kubelet[2861]: I0413 19:24:09.734210 2861 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:10.217267 kubelet[2861]: I0413 19:24:10.217192 2861 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:10.217267 kubelet[2861]: I0413 19:24:10.217243 2861 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:10.217725 kubelet[2861]: I0413 19:24:10.217667 2861 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:10.262716 kubelet[2861]: E0413 19:24:10.262644 2861 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:10.264242 kubelet[2861]: I0413 19:24:10.264015 2861 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:10.279212 kubelet[2861]: E0413 19:24:10.279127 2861 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:10.279212 kubelet[2861]: I0413 19:24:10.279198 2861 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:10.286226 kubelet[2861]: I0413 19:24:10.286164 2861 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 19:24:10.286798 kubelet[2861]: I0413 19:24:10.286735 2861 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:10.287054 kubelet[2861]: I0413 19:24:10.286786 2861 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-52","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:24:10.287205 kubelet[2861]: I0413 19:24:10.287056 2861 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:24:10.287205 kubelet[2861]: I0413 19:24:10.287076 2861 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:10.287464 kubelet[2861]: I0413 19:24:10.287431 2861 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:10.293287 kubelet[2861]: I0413 19:24:10.293084 2861 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:24:10.293287 kubelet[2861]: I0413 19:24:10.293286 2861 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:10.293512 kubelet[2861]: I0413 19:24:10.293340 2861 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:10.293512 kubelet[2861]: I0413 19:24:10.293375 2861 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:10.306510 kubelet[2861]: E0413 19:24:10.305973 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-52&limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:10.306668 kubelet[2861]: I0413 19:24:10.306588 2861 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:10.307802 kubelet[2861]: I0413 19:24:10.307754 2861 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Apr 13 19:24:10.308475 kubelet[2861]: W0413 19:24:10.308015 2861 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 19:24:10.312965 kubelet[2861]: E0413 19:24:10.312912 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:10.314639 kubelet[2861]: I0413 19:24:10.314585 2861 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:10.314784 kubelet[2861]: I0413 19:24:10.314711 2861 server.go:1289] "Started kubelet" Apr 13 19:24:10.316089 kubelet[2861]: I0413 19:24:10.314895 2861 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:10.317629 kubelet[2861]: I0413 19:24:10.317594 2861 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:10.320499 kubelet[2861]: I0413 19:24:10.319051 2861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:10.320499 kubelet[2861]: I0413 19:24:10.319727 2861 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:10.323731 kubelet[2861]: E0413 19:24:10.319946 2861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.52:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.52:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-52.18a6010b1ea4b4e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-52,UID:ip-172-31-27-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-52,},FirstTimestamp:2026-04-13 19:24:10.314618085 +0000 UTC m=+0.640799548,LastTimestamp:2026-04-13 19:24:10.314618085 +0000 UTC m=+0.640799548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-52,}" Apr 13 19:24:10.329399 kubelet[2861]: I0413 19:24:10.326858 2861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:10.329399 kubelet[2861]: I0413 19:24:10.327016 2861 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:10.329399 kubelet[2861]: I0413 19:24:10.327169 2861 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:10.332161 kubelet[2861]: I0413 19:24:10.332123 2861 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:10.332378 kubelet[2861]: I0413 19:24:10.332359 2861 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:10.335504 kubelet[2861]: E0413 19:24:10.335430 2861 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-52\" not found" Apr 13 19:24:10.336119 kubelet[2861]: E0413 19:24:10.336016 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-52?timeout=10s\": dial tcp 172.31.27.52:6443: connect: connection 
refused" interval="200ms" Apr 13 19:24:10.336439 kubelet[2861]: E0413 19:24:10.336398 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:10.340101 kubelet[2861]: E0413 19:24:10.338337 2861 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:10.340101 kubelet[2861]: I0413 19:24:10.339320 2861 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:10.340101 kubelet[2861]: I0413 19:24:10.339482 2861 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:10.341713 kubelet[2861]: I0413 19:24:10.341664 2861 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:10.390055 kubelet[2861]: I0413 19:24:10.390017 2861 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:10.390259 kubelet[2861]: I0413 19:24:10.390237 2861 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:10.390459 kubelet[2861]: I0413 19:24:10.390392 2861 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:10.391898 kubelet[2861]: I0413 19:24:10.391851 2861 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:10.395514 kubelet[2861]: I0413 19:24:10.395443 2861 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:10.395725 kubelet[2861]: I0413 19:24:10.395704 2861 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:10.395906 kubelet[2861]: I0413 19:24:10.395884 2861 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:24:10.396079 kubelet[2861]: I0413 19:24:10.396011 2861 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:10.396545 kubelet[2861]: E0413 19:24:10.396202 2861 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:10.396545 kubelet[2861]: I0413 19:24:10.395948 2861 policy_none.go:49] "None policy: Start" Apr 13 19:24:10.396545 kubelet[2861]: I0413 19:24:10.396256 2861 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:10.396545 kubelet[2861]: I0413 19:24:10.396296 2861 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:10.399927 kubelet[2861]: E0413 19:24:10.399756 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:10.417980 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 19:24:10.435748 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 19:24:10.437511 kubelet[2861]: E0413 19:24:10.437252 2861 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-52\" not found" Apr 13 19:24:10.446728 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:24:10.458479 kubelet[2861]: E0413 19:24:10.457547 2861 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:10.458479 kubelet[2861]: I0413 19:24:10.457860 2861 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:10.458479 kubelet[2861]: I0413 19:24:10.457888 2861 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:10.459422 kubelet[2861]: I0413 19:24:10.459393 2861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:10.470502 kubelet[2861]: E0413 19:24:10.468302 2861 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:24:10.471615 kubelet[2861]: E0413 19:24:10.471578 2861 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-52\" not found" Apr 13 19:24:10.526595 systemd[1]: Created slice kubepods-burstable-pod78c6e2b87648942011cad28e5bbc6a7c.slice - libcontainer container kubepods-burstable-pod78c6e2b87648942011cad28e5bbc6a7c.slice. Apr 13 19:24:10.541905 kubelet[2861]: E0413 19:24:10.538593 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-52?timeout=10s\": dial tcp 172.31.27.52:6443: connect: connection refused" interval="400ms" Apr 13 19:24:10.547652 kubelet[2861]: E0413 19:24:10.547603 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:10.559894 systemd[1]: Created slice kubepods-burstable-pod7022377983d95e53c49047c6d17daa79.slice - libcontainer container kubepods-burstable-pod7022377983d95e53c49047c6d17daa79.slice. Apr 13 19:24:10.564802 kubelet[2861]: I0413 19:24:10.564767 2861 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-52" Apr 13 19:24:10.568439 kubelet[2861]: E0413 19:24:10.568375 2861 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.52:6443/api/v1/nodes\": dial tcp 172.31.27.52:6443: connect: connection refused" node="ip-172-31-27-52" Apr 13 19:24:10.573474 kubelet[2861]: E0413 19:24:10.573400 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:10.577580 systemd[1]: Created slice kubepods-burstable-pod4c084d1301f00ef43a727049f7d0de62.slice - libcontainer container kubepods-burstable-pod4c084d1301f00ef43a727049f7d0de62.slice. 
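
The kubepods slices created above follow the kubelet's systemd cgroup naming scheme: a kubepods.slice root, a per-QoS child slice for burstable and best-effort pods, and one pod slice per pod UID, with any dashes in the UID escaped to underscores (the escaped form shows up later in this log for the kube-proxy pod). A minimal sketch of that naming, assuming guaranteed pods simply omit the QoS segment, which this log does not demonstrate:

```python
# Reproduce the slice names seen in this log; helper name and structure are mine.
def pod_slice(uid: str, qos: str = "") -> str:
    escaped = uid.replace("-", "_")            # dashes in the UID become underscores
    prefix = f"kubepods-{qos}" if qos else "kubepods"
    return f"{prefix}-pod{escaped}.slice"

print(pod_slice("78c6e2b87648942011cad28e5bbc6a7c", "burstable"))
# kubepods-burstable-pod78c6e2b87648942011cad28e5bbc6a7c.slice
print(pod_slice("4a402a71-4252-4aa1-9eb3-a837533ff5fc", "besteffort"))
# kubepods-besteffort-pod4a402a71_4252_4aa1_9eb3_a837533ff5fc.slice
```
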
Apr 13 19:24:10.582414 kubelet[2861]: E0413 19:24:10.582026 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:10.634324 kubelet[2861]: I0413 19:24:10.634254 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78c6e2b87648942011cad28e5bbc6a7c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-52\" (UID: \"78c6e2b87648942011cad28e5bbc6a7c\") " pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:10.634514 kubelet[2861]: I0413 19:24:10.634321 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:10.634514 kubelet[2861]: I0413 19:24:10.634380 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:10.634514 kubelet[2861]: I0413 19:24:10.634423 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78c6e2b87648942011cad28e5bbc6a7c-ca-certs\") pod \"kube-apiserver-ip-172-31-27-52\" (UID: \"78c6e2b87648942011cad28e5bbc6a7c\") " pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:10.634514 kubelet[2861]: I0413 19:24:10.634490 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78c6e2b87648942011cad28e5bbc6a7c-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-52\" (UID: \"78c6e2b87648942011cad28e5bbc6a7c\") " pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:10.634738 kubelet[2861]: I0413 19:24:10.634542 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:10.634738 kubelet[2861]: I0413 19:24:10.634576 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:10.634738 kubelet[2861]: I0413 19:24:10.634612 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 
19:24:10.634738 kubelet[2861]: I0413 19:24:10.634680 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c084d1301f00ef43a727049f7d0de62-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-52\" (UID: \"4c084d1301f00ef43a727049f7d0de62\") " pod="kube-system/kube-scheduler-ip-172-31-27-52" Apr 13 19:24:10.771315 kubelet[2861]: I0413 19:24:10.771169 2861 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-52" Apr 13 19:24:10.772787 kubelet[2861]: E0413 19:24:10.772719 2861 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.52:6443/api/v1/nodes\": dial tcp 172.31.27.52:6443: connect: connection refused" node="ip-172-31-27-52" Apr 13 19:24:10.850084 containerd[2017]: time="2026-04-13T19:24:10.849661500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-52,Uid:78c6e2b87648942011cad28e5bbc6a7c,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:10.877320 containerd[2017]: time="2026-04-13T19:24:10.877246188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-52,Uid:7022377983d95e53c49047c6d17daa79,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:10.883851 containerd[2017]: time="2026-04-13T19:24:10.883763796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-52,Uid:4c084d1301f00ef43a727049f7d0de62,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:10.939977 kubelet[2861]: E0413 19:24:10.939919 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-52?timeout=10s\": dial tcp 172.31.27.52:6443: connect: connection refused" interval="800ms" Apr 13 19:24:11.175967 kubelet[2861]: I0413 19:24:11.175400 2861 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-52" Apr 13 19:24:11.175967 kubelet[2861]: E0413 19:24:11.175917 2861 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.52:6443/api/v1/nodes\": dial tcp 172.31.27.52:6443: connect: connection refused" node="ip-172-31-27-52" Apr 13 19:24:11.459412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877735058.mount: Deactivated successfully. 
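
Every "connection refused" error against https://172.31.27.52:6443 in the entries above is expected at this stage: the kubelet comes up before the static kube-apiserver pod it is about to create, so its watches, lease updates, and CSR posts all fail until that container starts. A minimal, illustrative wait loop (names and timings are mine, not kubelet code) showing how an external script could wait for the same endpoint:

```python
import socket
import time

# Poll the apiserver endpoint the kubelet is retrying above until it accepts
# TCP connections. Purely illustrative of the bootstrap chicken-and-egg phase.
def wait_for_apiserver(host: str = "172.31.27.52", port: int = 6443,
                       timeout: float = 120.0, interval: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True          # port accepts connections
        except OSError:
            time.sleep(interval)     # same "connection refused" the reflectors log
    return False

if __name__ == "__main__":
    print("apiserver reachable:", wait_for_apiserver())
```
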
Apr 13 19:24:11.474535 containerd[2017]: time="2026-04-13T19:24:11.474177491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:11.481638 containerd[2017]: time="2026-04-13T19:24:11.481555523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 19:24:11.483579 containerd[2017]: time="2026-04-13T19:24:11.483515927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:11.486755 containerd[2017]: time="2026-04-13T19:24:11.486684491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:11.489239 containerd[2017]: time="2026-04-13T19:24:11.489171311Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:11.491800 containerd[2017]: time="2026-04-13T19:24:11.491720507Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:11.494287 containerd[2017]: time="2026-04-13T19:24:11.494067803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:11.496793 containerd[2017]: time="2026-04-13T19:24:11.496585607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:11.500691 containerd[2017]: time="2026-04-13T19:24:11.500570927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 650.749059ms" Apr 13 19:24:11.506681 containerd[2017]: time="2026-04-13T19:24:11.506510711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 629.141679ms" Apr 13 19:24:11.514513 containerd[2017]: time="2026-04-13T19:24:11.513246299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 629.134239ms" Apr 13 19:24:11.564031 kubelet[2861]: E0413 19:24:11.561861 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-52&limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:11.716303 containerd[2017]: time="2026-04-13T19:24:11.715319232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:11.721597 containerd[2017]: time="2026-04-13T19:24:11.715421076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:11.721597 containerd[2017]: time="2026-04-13T19:24:11.717409476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:11.721597 containerd[2017]: time="2026-04-13T19:24:11.717621468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:11.723653 containerd[2017]: time="2026-04-13T19:24:11.723476748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:11.723653 containerd[2017]: time="2026-04-13T19:24:11.723582096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:11.724102 containerd[2017]: time="2026-04-13T19:24:11.723620028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:11.724668 containerd[2017]: time="2026-04-13T19:24:11.724410180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:11.729778 containerd[2017]: time="2026-04-13T19:24:11.729530604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:11.730067 containerd[2017]: time="2026-04-13T19:24:11.729699048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:11.730067 containerd[2017]: time="2026-04-13T19:24:11.730032852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:11.730957 containerd[2017]: time="2026-04-13T19:24:11.730752192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:11.741295 kubelet[2861]: E0413 19:24:11.741210 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-52?timeout=10s\": dial tcp 172.31.27.52:6443: connect: connection refused" interval="1.6s" Apr 13 19:24:11.785840 systemd[1]: Started cri-containerd-2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32.scope - libcontainer container 2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32. Apr 13 19:24:11.789619 systemd[1]: Started cri-containerd-afa10edd1b2166f066631179417c7461c470caa5d594b1df0a71d2eeb566a4bf.scope - libcontainer container afa10edd1b2166f066631179417c7461c470caa5d594b1df0a71d2eeb566a4bf. 
Apr 13 19:24:11.792968 systemd[1]: Started cri-containerd-de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927.scope - libcontainer container de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927. Apr 13 19:24:11.846001 kubelet[2861]: E0413 19:24:11.845833 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:11.896020 kubelet[2861]: E0413 19:24:11.895349 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:11.907632 containerd[2017]: time="2026-04-13T19:24:11.903077917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-52,Uid:78c6e2b87648942011cad28e5bbc6a7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"afa10edd1b2166f066631179417c7461c470caa5d594b1df0a71d2eeb566a4bf\"" Apr 13 19:24:11.921630 containerd[2017]: time="2026-04-13T19:24:11.921533485Z" level=info msg="CreateContainer within sandbox \"afa10edd1b2166f066631179417c7461c470caa5d594b1df0a71d2eeb566a4bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:24:11.926619 containerd[2017]: time="2026-04-13T19:24:11.925999681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-52,Uid:4c084d1301f00ef43a727049f7d0de62,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32\"" Apr 13 19:24:11.929881 containerd[2017]: time="2026-04-13T19:24:11.929811769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-52,Uid:7022377983d95e53c49047c6d17daa79,Namespace:kube-system,Attempt:0,} returns sandbox id \"de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927\"" Apr 13 19:24:11.938795 containerd[2017]: time="2026-04-13T19:24:11.938717761Z" level=info msg="CreateContainer within sandbox \"2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:24:11.944720 containerd[2017]: time="2026-04-13T19:24:11.944531965Z" level=info msg="CreateContainer within sandbox \"de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:24:11.973504 containerd[2017]: time="2026-04-13T19:24:11.972510205Z" level=info msg="CreateContainer within sandbox \"afa10edd1b2166f066631179417c7461c470caa5d594b1df0a71d2eeb566a4bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"35f53ccc7982bebbf7a19f858778419e5e226db0bf58b8a553d4ae182f417ead\"" Apr 13 19:24:11.978535 containerd[2017]: time="2026-04-13T19:24:11.977806657Z" level=info msg="StartContainer for \"35f53ccc7982bebbf7a19f858778419e5e226db0bf58b8a553d4ae182f417ead\"" Apr 13 19:24:11.985726 containerd[2017]: time="2026-04-13T19:24:11.985246837Z" level=info msg="CreateContainer within sandbox \"2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd\"" Apr 13 19:24:11.990910 kubelet[2861]: I0413 19:24:11.990869 2861 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-52" Apr 13 19:24:11.991409 containerd[2017]: time="2026-04-13T19:24:11.991233805Z" level=info msg="StartContainer for \"592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd\"" Apr 13 19:24:11.992568 kubelet[2861]: E0413 19:24:11.992084 2861 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.52:6443/api/v1/nodes\": dial tcp 172.31.27.52:6443: connect: connection refused" node="ip-172-31-27-52" Apr 13 19:24:11.998812 containerd[2017]: time="2026-04-13T19:24:11.998727865Z" level=info msg="CreateContainer within sandbox \"de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06\"" Apr 13 19:24:12.000594 kubelet[2861]: E0413 19:24:12.000547 2861 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:12.001866 containerd[2017]: time="2026-04-13T19:24:12.001389573Z" level=info msg="StartContainer for \"a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06\"" Apr 13 19:24:12.044776 systemd[1]: Started cri-containerd-35f53ccc7982bebbf7a19f858778419e5e226db0bf58b8a553d4ae182f417ead.scope - libcontainer container 35f53ccc7982bebbf7a19f858778419e5e226db0bf58b8a553d4ae182f417ead. Apr 13 19:24:12.076774 systemd[1]: Started cri-containerd-a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06.scope - libcontainer container a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06. Apr 13 19:24:12.088785 systemd[1]: Started cri-containerd-592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd.scope - libcontainer container 592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd. 
Apr 13 19:24:12.178037 containerd[2017]: time="2026-04-13T19:24:12.177806854Z" level=info msg="StartContainer for \"35f53ccc7982bebbf7a19f858778419e5e226db0bf58b8a553d4ae182f417ead\" returns successfully" Apr 13 19:24:12.199375 containerd[2017]: time="2026-04-13T19:24:12.199258678Z" level=info msg="StartContainer for \"592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd\" returns successfully" Apr 13 19:24:12.249588 containerd[2017]: time="2026-04-13T19:24:12.247804883Z" level=info msg="StartContainer for \"a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06\" returns successfully" Apr 13 19:24:12.413240 kubelet[2861]: E0413 19:24:12.413152 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:12.426335 kubelet[2861]: E0413 19:24:12.426148 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:12.434257 kubelet[2861]: E0413 19:24:12.433957 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:13.434177 kubelet[2861]: E0413 19:24:13.434121 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:13.436263 kubelet[2861]: E0413 19:24:13.436218 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:13.597772 kubelet[2861]: I0413 19:24:13.597722 2861 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-52" Apr 13 19:24:14.936028 kubelet[2861]: E0413 19:24:14.935971 2861 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:15.065498 update_engine[1998]: I20260413 19:24:15.064497 1998 update_attempter.cc:509] Updating boot flags... 
Apr 13 19:24:15.196506 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3159) Apr 13 19:24:15.678477 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3161) Apr 13 19:24:17.316686 kubelet[2861]: I0413 19:24:17.316621 2861 apiserver.go:52] "Watching apiserver" Apr 13 19:24:17.432742 kubelet[2861]: I0413 19:24:17.432626 2861 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:17.471749 kubelet[2861]: E0413 19:24:17.471690 2861 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-52\" not found" node="ip-172-31-27-52" Apr 13 19:24:17.598783 kubelet[2861]: I0413 19:24:17.598629 2861 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-52" Apr 13 19:24:17.638480 kubelet[2861]: I0413 19:24:17.636351 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:17.786124 kubelet[2861]: E0413 19:24:17.786065 2861 kubelet.go:3311] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:17.786286 kubelet[2861]: I0413 19:24:17.786117 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:17.839667 kubelet[2861]: E0413 19:24:17.839602 2861 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-52\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:17.839849 kubelet[2861]: I0413 19:24:17.839677 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-52" Apr 13 19:24:17.852011 kubelet[2861]: E0413 19:24:17.851847 2861 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-52\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-52" Apr 13 19:24:19.670256 systemd[1]: Reloading requested from client PID 3331 ('systemctl') (unit session-7.scope)... Apr 13 19:24:19.670287 systemd[1]: Reloading... Apr 13 19:24:19.856491 zram_generator::config[3380]: No configuration found. Apr 13 19:24:20.110335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:20.335877 systemd[1]: Reloading finished in 664 ms. Apr 13 19:24:20.377540 kubelet[2861]: I0413 19:24:20.377379 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:20.434119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:20.454405 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:24:20.456555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:20.456787 systemd[1]: kubelet.service: Consumed 1.455s CPU time, 129.8M memory peak, 0B memory swap peak. Apr 13 19:24:20.463988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:20.816030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:24:20.837183 (kubelet)[3431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:20.932846 kubelet[3431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:20.932846 kubelet[3431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:20.932846 kubelet[3431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:20.933433 kubelet[3431]: I0413 19:24:20.932952 3431 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:20.952838 kubelet[3431]: I0413 19:24:20.952617 3431 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:20.952838 kubelet[3431]: I0413 19:24:20.952671 3431 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:20.953095 kubelet[3431]: I0413 19:24:20.953060 3431 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:20.956046 kubelet[3431]: I0413 19:24:20.955636 3431 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:24:20.961497 kubelet[3431]: I0413 19:24:20.959843 3431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:20.972474 kubelet[3431]: E0413 19:24:20.972401 3431 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:20.972474 kubelet[3431]: I0413 19:24:20.972479 3431 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:20.978052 kubelet[3431]: I0413 19:24:20.977992 3431 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 19:24:20.978621 kubelet[3431]: I0413 19:24:20.978563 3431 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:20.978938 kubelet[3431]: I0413 19:24:20.978618 3431 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-52","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:24:20.979105 kubelet[3431]: I0413 19:24:20.978937 3431 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:24:20.979105 kubelet[3431]: I0413 19:24:20.978958 3431 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:20.979105 kubelet[3431]: I0413 19:24:20.979047 3431 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:20.980251 kubelet[3431]: I0413 19:24:20.979351 3431 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:24:20.980251 kubelet[3431]: I0413 19:24:20.979382 3431 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:20.980251 kubelet[3431]: I0413 19:24:20.979428 3431 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:20.980251 kubelet[3431]: I0413 19:24:20.979536 3431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:20.998497 kubelet[3431]: I0413 19:24:20.997713 3431 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:21.000213 kubelet[3431]: I0413 19:24:21.000177 3431 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:21.017565 kubelet[3431]: I0413 19:24:21.017014 3431 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:21.017846 kubelet[3431]: I0413 19:24:21.017822 3431 server.go:1289] "Started kubelet" Apr 13 19:24:21.023062 kubelet[3431]: I0413 19:24:21.023024 3431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:21.040284 kubelet[3431]: I0413 
19:24:21.038964 3431 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:21.043241 kubelet[3431]: I0413 19:24:21.041935 3431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:21.054035 kubelet[3431]: I0413 19:24:21.053052 3431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:21.055155 kubelet[3431]: I0413 19:24:21.055108 3431 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:21.060299 kubelet[3431]: I0413 19:24:21.060249 3431 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:21.063749 kubelet[3431]: I0413 19:24:21.063698 3431 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:21.067054 kubelet[3431]: I0413 19:24:21.067003 3431 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:21.071540 kubelet[3431]: I0413 19:24:21.071224 3431 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:21.076248 kubelet[3431]: I0413 19:24:21.076199 3431 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:21.087352 kubelet[3431]: E0413 19:24:21.086978 3431 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:21.095768 kubelet[3431]: I0413 19:24:21.095222 3431 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:21.095768 kubelet[3431]: I0413 19:24:21.095263 3431 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:21.116743 kubelet[3431]: I0413 19:24:21.116646 3431 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:21.120974 kubelet[3431]: I0413 19:24:21.120895 3431 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:21.120974 kubelet[3431]: I0413 19:24:21.120961 3431 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:21.121186 kubelet[3431]: I0413 19:24:21.121010 3431 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
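
The podresources endpoint above is rate limited with qps=100 and burstTokens=10, which reads as a token bucket that refills at roughly 100 tokens per second and holds at most 10. A toy sketch of that behaviour, assuming standard token-bucket semantics rather than the kubelet's actual limiter implementation:

```python
import time

# Toy token bucket with the qps/burst values from the log entry above.
class TokenBucket:
    def __init__(self, qps: float = 100.0, burst: int = 10):
        self.rate = qps            # tokens refilled per second
        self.capacity = burst      # maximum tokens that can accumulate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket()
print(sum(bucket.allow() for _ in range(50)))
# ~10: the burst passes immediately, further requests pass at ~100 per second
```
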
Apr 13 19:24:21.121186 kubelet[3431]: I0413 19:24:21.121026 3431 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:21.121186 kubelet[3431]: E0413 19:24:21.121094 3431 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:21.221418 kubelet[3431]: I0413 19:24:21.221371 3431 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:21.221418 kubelet[3431]: I0413 19:24:21.221406 3431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:21.221670 kubelet[3431]: I0413 19:24:21.221472 3431 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:21.222594 kubelet[3431]: I0413 19:24:21.221746 3431 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:24:21.222594 kubelet[3431]: I0413 19:24:21.221781 3431 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:24:21.222594 kubelet[3431]: I0413 19:24:21.221815 3431 policy_none.go:49] "None policy: Start" Apr 13 19:24:21.222594 kubelet[3431]: I0413 19:24:21.221834 3431 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:21.222594 kubelet[3431]: I0413 19:24:21.221855 3431 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:21.222594 kubelet[3431]: I0413 19:24:21.222056 3431 state_mem.go:75] "Updated machine memory state" Apr 13 19:24:21.222594 kubelet[3431]: E0413 19:24:21.222528 3431 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 19:24:21.232537 kubelet[3431]: E0413 19:24:21.232442 3431 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:21.234247 kubelet[3431]: I0413 19:24:21.233689 3431 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:21.234890 kubelet[3431]: I0413 19:24:21.234418 3431 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:21.235869 kubelet[3431]: I0413 19:24:21.235697 3431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:21.242286 kubelet[3431]: E0413 19:24:21.242108 3431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:24:21.348597 kubelet[3431]: I0413 19:24:21.345803 3431 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-52" Apr 13 19:24:21.363496 kubelet[3431]: I0413 19:24:21.363012 3431 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-52" Apr 13 19:24:21.363496 kubelet[3431]: I0413 19:24:21.363142 3431 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-52" Apr 13 19:24:21.425429 kubelet[3431]: I0413 19:24:21.423348 3431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:21.425429 kubelet[3431]: I0413 19:24:21.424040 3431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:21.426399 kubelet[3431]: I0413 19:24:21.426068 3431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-52" Apr 13 19:24:21.438052 kubelet[3431]: E0413 19:24:21.437933 3431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-52\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:21.478999 kubelet[3431]: I0413 19:24:21.478934 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:21.479276 kubelet[3431]: I0413 19:24:21.479230 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:21.479374 kubelet[3431]: I0413 19:24:21.479309 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78c6e2b87648942011cad28e5bbc6a7c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-52\" (UID: \"78c6e2b87648942011cad28e5bbc6a7c\") " pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:21.479374 kubelet[3431]: I0413 19:24:21.479354 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:21.479532 kubelet[3431]: I0413 19:24:21.479405 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:21.479532 kubelet[3431]: I0413 19:24:21.479516 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4c084d1301f00ef43a727049f7d0de62-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-52\" (UID: \"4c084d1301f00ef43a727049f7d0de62\") " pod="kube-system/kube-scheduler-ip-172-31-27-52" Apr 13 19:24:21.479632 kubelet[3431]: I0413 19:24:21.479559 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78c6e2b87648942011cad28e5bbc6a7c-ca-certs\") pod \"kube-apiserver-ip-172-31-27-52\" (UID: \"78c6e2b87648942011cad28e5bbc6a7c\") " pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:21.479632 kubelet[3431]: I0413 19:24:21.479596 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78c6e2b87648942011cad28e5bbc6a7c-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-52\" (UID: \"78c6e2b87648942011cad28e5bbc6a7c\") " pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:21.480518 kubelet[3431]: I0413 19:24:21.479631 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7022377983d95e53c49047c6d17daa79-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-52\" (UID: \"7022377983d95e53c49047c6d17daa79\") " pod="kube-system/kube-controller-manager-ip-172-31-27-52" Apr 13 19:24:21.985690 kubelet[3431]: I0413 19:24:21.985318 3431 apiserver.go:52] "Watching apiserver" Apr 13 19:24:22.062481 kubelet[3431]: I0413 19:24:22.061187 3431 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:22.181115 kubelet[3431]: I0413 19:24:22.180822 3431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:22.195216 kubelet[3431]: E0413 19:24:22.195071 3431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-52\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-52" Apr 13 19:24:22.241605 kubelet[3431]: I0413 19:24:22.239979 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-52" podStartSLOduration=2.239957048 podStartE2EDuration="2.239957048s" podCreationTimestamp="2026-04-13 19:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:22.218357216 +0000 UTC m=+1.372295936" watchObservedRunningTime="2026-04-13 19:24:22.239957048 +0000 UTC m=+1.393895756" Apr 13 19:24:22.259003 kubelet[3431]: I0413 19:24:22.258929 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-52" podStartSLOduration=1.25890842 podStartE2EDuration="1.25890842s" podCreationTimestamp="2026-04-13 19:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:22.240410012 +0000 UTC m=+1.394348720" watchObservedRunningTime="2026-04-13 19:24:22.25890842 +0000 UTC m=+1.412847128" Apr 13 19:24:27.118476 kubelet[3431]: I0413 19:24:27.118410 3431 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:24:27.119232 containerd[2017]: time="2026-04-13T19:24:27.118933597Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 13 19:24:27.119782 kubelet[3431]: I0413 19:24:27.119306 3431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:24:28.014407 kubelet[3431]: I0413 19:24:28.014307 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-52" podStartSLOduration=7.014284081 podStartE2EDuration="7.014284081s" podCreationTimestamp="2026-04-13 19:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:22.25965164 +0000 UTC m=+1.413590348" watchObservedRunningTime="2026-04-13 19:24:28.014284081 +0000 UTC m=+7.168222789" Apr 13 19:24:28.040972 systemd[1]: Created slice kubepods-besteffort-pod4a402a71_4252_4aa1_9eb3_a837533ff5fc.slice - libcontainer container kubepods-besteffort-pod4a402a71_4252_4aa1_9eb3_a837533ff5fc.slice. Apr 13 19:24:28.121680 kubelet[3431]: I0413 19:24:28.121496 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a402a71-4252-4aa1-9eb3-a837533ff5fc-xtables-lock\") pod \"kube-proxy-gd4ss\" (UID: \"4a402a71-4252-4aa1-9eb3-a837533ff5fc\") " pod="kube-system/kube-proxy-gd4ss" Apr 13 19:24:28.121680 kubelet[3431]: I0413 19:24:28.121569 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdbsp\" (UniqueName: \"kubernetes.io/projected/4a402a71-4252-4aa1-9eb3-a837533ff5fc-kube-api-access-kdbsp\") pod \"kube-proxy-gd4ss\" (UID: \"4a402a71-4252-4aa1-9eb3-a837533ff5fc\") " pod="kube-system/kube-proxy-gd4ss" Apr 13 19:24:28.121680 kubelet[3431]: I0413 19:24:28.121632 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a402a71-4252-4aa1-9eb3-a837533ff5fc-kube-proxy\") pod \"kube-proxy-gd4ss\" (UID: \"4a402a71-4252-4aa1-9eb3-a837533ff5fc\") " pod="kube-system/kube-proxy-gd4ss" Apr 13 19:24:28.122505 kubelet[3431]: I0413 19:24:28.121690 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a402a71-4252-4aa1-9eb3-a837533ff5fc-lib-modules\") pod \"kube-proxy-gd4ss\" (UID: \"4a402a71-4252-4aa1-9eb3-a837533ff5fc\") " pod="kube-system/kube-proxy-gd4ss" Apr 13 19:24:28.341310 systemd[1]: Created slice kubepods-besteffort-podb308fb24_da5f_448a_b4a8_f1cbfc170e84.slice - libcontainer container kubepods-besteffort-podb308fb24_da5f_448a_b4a8_f1cbfc170e84.slice. Apr 13 19:24:28.356980 containerd[2017]: time="2026-04-13T19:24:28.356909259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gd4ss,Uid:4a402a71-4252-4aa1-9eb3-a837533ff5fc,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:28.411951 containerd[2017]: time="2026-04-13T19:24:28.410878299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:28.411951 containerd[2017]: time="2026-04-13T19:24:28.410981619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:28.411951 containerd[2017]: time="2026-04-13T19:24:28.411022575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.411951 containerd[2017]: time="2026-04-13T19:24:28.411389067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.424517 kubelet[3431]: I0413 19:24:28.423437 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b308fb24-da5f-448a-b4a8-f1cbfc170e84-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-lh6z7\" (UID: \"b308fb24-da5f-448a-b4a8-f1cbfc170e84\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lh6z7" Apr 13 19:24:28.424517 kubelet[3431]: I0413 19:24:28.423533 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2w7\" (UniqueName: \"kubernetes.io/projected/b308fb24-da5f-448a-b4a8-f1cbfc170e84-kube-api-access-xr2w7\") pod \"tigera-operator-6bf85f8dd-lh6z7\" (UID: \"b308fb24-da5f-448a-b4a8-f1cbfc170e84\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lh6z7" Apr 13 19:24:28.454808 systemd[1]: Started cri-containerd-a5af59a4aad2590204cf056f322df5ac69d013403f7053a6de8c1befdf1465fa.scope - libcontainer container a5af59a4aad2590204cf056f322df5ac69d013403f7053a6de8c1befdf1465fa. Apr 13 19:24:28.507617 containerd[2017]: time="2026-04-13T19:24:28.507551235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gd4ss,Uid:4a402a71-4252-4aa1-9eb3-a837533ff5fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5af59a4aad2590204cf056f322df5ac69d013403f7053a6de8c1befdf1465fa\"" Apr 13 19:24:28.517399 containerd[2017]: time="2026-04-13T19:24:28.517329639Z" level=info msg="CreateContainer within sandbox \"a5af59a4aad2590204cf056f322df5ac69d013403f7053a6de8c1befdf1465fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:24:28.552425 containerd[2017]: time="2026-04-13T19:24:28.552343516Z" level=info msg="CreateContainer within sandbox \"a5af59a4aad2590204cf056f322df5ac69d013403f7053a6de8c1befdf1465fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c5417f7593cd10a74f8666d011d6df491c449f0a171a58b219a4ec78c5f514f\"" Apr 13 19:24:28.553577 containerd[2017]: time="2026-04-13T19:24:28.553512772Z" level=info msg="StartContainer for \"3c5417f7593cd10a74f8666d011d6df491c449f0a171a58b219a4ec78c5f514f\"" Apr 13 19:24:28.599815 systemd[1]: Started cri-containerd-3c5417f7593cd10a74f8666d011d6df491c449f0a171a58b219a4ec78c5f514f.scope - libcontainer container 3c5417f7593cd10a74f8666d011d6df491c449f0a171a58b219a4ec78c5f514f. Apr 13 19:24:28.647965 containerd[2017]: time="2026-04-13T19:24:28.647727544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lh6z7,Uid:b308fb24-da5f-448a-b4a8-f1cbfc170e84,Namespace:tigera-operator,Attempt:0,}" Apr 13 19:24:28.660476 containerd[2017]: time="2026-04-13T19:24:28.660280384Z" level=info msg="StartContainer for \"3c5417f7593cd10a74f8666d011d6df491c449f0a171a58b219a4ec78c5f514f\" returns successfully" Apr 13 19:24:28.701070 containerd[2017]: time="2026-04-13T19:24:28.700689664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:28.701070 containerd[2017]: time="2026-04-13T19:24:28.700980568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:28.704036 containerd[2017]: time="2026-04-13T19:24:28.703739236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.704494 containerd[2017]: time="2026-04-13T19:24:28.704288380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.764014 systemd[1]: Started cri-containerd-492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1.scope - libcontainer container 492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1. Apr 13 19:24:28.842046 containerd[2017]: time="2026-04-13T19:24:28.841967621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lh6z7,Uid:b308fb24-da5f-448a-b4a8-f1cbfc170e84,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1\"" Apr 13 19:24:28.847984 containerd[2017]: time="2026-04-13T19:24:28.847518341Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 19:24:30.295691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426306653.mount: Deactivated successfully. Apr 13 19:24:31.061971 containerd[2017]: time="2026-04-13T19:24:31.061760512Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:31.063910 containerd[2017]: time="2026-04-13T19:24:31.063809092Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 13 19:24:31.066233 containerd[2017]: time="2026-04-13T19:24:31.066152020Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:31.073501 containerd[2017]: time="2026-04-13T19:24:31.072962740Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:31.074945 containerd[2017]: time="2026-04-13T19:24:31.074652844Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.227069679s" Apr 13 19:24:31.074945 containerd[2017]: time="2026-04-13T19:24:31.074718172Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 13 19:24:31.085868 containerd[2017]: time="2026-04-13T19:24:31.085541716Z" level=info msg="CreateContainer within sandbox \"492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 19:24:31.119417 containerd[2017]: time="2026-04-13T19:24:31.119209216Z" level=info msg="CreateContainer within sandbox \"492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd\"" Apr 13 19:24:31.123100 containerd[2017]: 
time="2026-04-13T19:24:31.121796968Z" level=info msg="StartContainer for \"4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd\"" Apr 13 19:24:31.179799 systemd[1]: Started cri-containerd-4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd.scope - libcontainer container 4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd. Apr 13 19:24:31.239364 containerd[2017]: time="2026-04-13T19:24:31.237498113Z" level=info msg="StartContainer for \"4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd\" returns successfully" Apr 13 19:24:31.271153 kubelet[3431]: I0413 19:24:31.270958 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gd4ss" podStartSLOduration=4.270930653 podStartE2EDuration="4.270930653s" podCreationTimestamp="2026-04-13 19:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:29.246371175 +0000 UTC m=+8.400309883" watchObservedRunningTime="2026-04-13 19:24:31.270930653 +0000 UTC m=+10.424869457" Apr 13 19:24:39.927352 sudo[2336]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:40.090969 sshd[2333]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:40.103213 systemd[1]: sshd@6-172.31.27.52:22-4.175.71.9:36554.service: Deactivated successfully. Apr 13 19:24:40.103535 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:24:40.110397 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:24:40.114471 systemd[1]: session-7.scope: Consumed 12.204s CPU time, 155.2M memory peak, 0B memory swap peak. Apr 13 19:24:40.118757 systemd-logind[1997]: Removed session 7. Apr 13 19:24:49.228734 kubelet[3431]: I0413 19:24:49.228616 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-lh6z7" podStartSLOduration=18.997729307 podStartE2EDuration="21.228594334s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="2026-04-13 19:24:28.846322073 +0000 UTC m=+8.000260781" lastFinishedPulling="2026-04-13 19:24:31.077187112 +0000 UTC m=+10.231125808" observedRunningTime="2026-04-13 19:24:32.259809726 +0000 UTC m=+11.413748422" watchObservedRunningTime="2026-04-13 19:24:49.228594334 +0000 UTC m=+28.382533030" Apr 13 19:24:49.252416 systemd[1]: Created slice kubepods-besteffort-poddee0f259_ff00_4696_bfeb_eb2dfff9d899.slice - libcontainer container kubepods-besteffort-poddee0f259_ff00_4696_bfeb_eb2dfff9d899.slice. 
Apr 13 19:24:49.269481 kubelet[3431]: I0413 19:24:49.268792 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dee0f259-ff00-4696-bfeb-eb2dfff9d899-typha-certs\") pod \"calico-typha-599fc4f64c-85vlk\" (UID: \"dee0f259-ff00-4696-bfeb-eb2dfff9d899\") " pod="calico-system/calico-typha-599fc4f64c-85vlk" Apr 13 19:24:49.269956 kubelet[3431]: I0413 19:24:49.269801 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hn9\" (UniqueName: \"kubernetes.io/projected/dee0f259-ff00-4696-bfeb-eb2dfff9d899-kube-api-access-j8hn9\") pod \"calico-typha-599fc4f64c-85vlk\" (UID: \"dee0f259-ff00-4696-bfeb-eb2dfff9d899\") " pod="calico-system/calico-typha-599fc4f64c-85vlk" Apr 13 19:24:49.269956 kubelet[3431]: I0413 19:24:49.269874 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dee0f259-ff00-4696-bfeb-eb2dfff9d899-tigera-ca-bundle\") pod \"calico-typha-599fc4f64c-85vlk\" (UID: \"dee0f259-ff00-4696-bfeb-eb2dfff9d899\") " pod="calico-system/calico-typha-599fc4f64c-85vlk" Apr 13 19:24:49.526406 systemd[1]: Created slice kubepods-besteffort-podc86e1ba7_5f4e_4fc9_81da_a46c76ced039.slice - libcontainer container kubepods-besteffort-podc86e1ba7_5f4e_4fc9_81da_a46c76ced039.slice. Apr 13 19:24:49.561235 containerd[2017]: time="2026-04-13T19:24:49.560843412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599fc4f64c-85vlk,Uid:dee0f259-ff00-4696-bfeb-eb2dfff9d899,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:49.572732 kubelet[3431]: I0413 19:24:49.571876 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-policysync\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.572732 kubelet[3431]: I0413 19:24:49.571945 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-cni-net-dir\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.572732 kubelet[3431]: I0413 19:24:49.571986 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-flexvol-driver-host\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.572732 kubelet[3431]: I0413 19:24:49.572039 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-lib-modules\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.572732 kubelet[3431]: I0413 19:24:49.572095 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-nodeproc\") pod \"calico-node-qjr8g\" (UID: 
\"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573104 kubelet[3431]: I0413 19:24:49.572133 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-var-lib-calico\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573104 kubelet[3431]: I0413 19:24:49.572208 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-bpffs\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573104 kubelet[3431]: I0413 19:24:49.572266 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-sys-fs\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573104 kubelet[3431]: I0413 19:24:49.572302 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2t42\" (UniqueName: \"kubernetes.io/projected/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-kube-api-access-w2t42\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573104 kubelet[3431]: I0413 19:24:49.572350 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-cni-bin-dir\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573411 kubelet[3431]: I0413 19:24:49.572383 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-node-certs\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573411 kubelet[3431]: I0413 19:24:49.572436 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-cni-log-dir\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573411 kubelet[3431]: I0413 19:24:49.572501 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-tigera-ca-bundle\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573411 kubelet[3431]: I0413 19:24:49.572537 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-var-run-calico\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.573411 
kubelet[3431]: I0413 19:24:49.572597 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c86e1ba7-5f4e-4fc9-81da-a46c76ced039-xtables-lock\") pod \"calico-node-qjr8g\" (UID: \"c86e1ba7-5f4e-4fc9-81da-a46c76ced039\") " pod="calico-system/calico-node-qjr8g" Apr 13 19:24:49.633347 containerd[2017]: time="2026-04-13T19:24:49.633190356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:49.634527 containerd[2017]: time="2026-04-13T19:24:49.634268616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:49.636606 containerd[2017]: time="2026-04-13T19:24:49.634720860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:49.638092 containerd[2017]: time="2026-04-13T19:24:49.637927284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:49.683484 kubelet[3431]: E0413 19:24:49.682552 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.683484 kubelet[3431]: W0413 19:24:49.682641 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.683484 kubelet[3431]: E0413 19:24:49.682690 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.685201 kubelet[3431]: E0413 19:24:49.684864 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.685201 kubelet[3431]: W0413 19:24:49.684927 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.685201 kubelet[3431]: E0413 19:24:49.684962 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.693770 kubelet[3431]: E0413 19:24:49.693717 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.693770 kubelet[3431]: W0413 19:24:49.693758 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.694080 kubelet[3431]: E0413 19:24:49.693790 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.694326 kubelet[3431]: E0413 19:24:49.694291 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.694326 kubelet[3431]: W0413 19:24:49.694320 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.695610 kubelet[3431]: E0413 19:24:49.694373 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.695610 kubelet[3431]: E0413 19:24:49.695550 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.695610 kubelet[3431]: W0413 19:24:49.695574 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.695610 kubelet[3431]: E0413 19:24:49.695597 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.700017 kubelet[3431]: E0413 19:24:49.699958 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.700162 kubelet[3431]: W0413 19:24:49.700019 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.700162 kubelet[3431]: E0413 19:24:49.700057 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.702974 kubelet[3431]: E0413 19:24:49.702924 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.702974 kubelet[3431]: W0413 19:24:49.702963 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.703351 kubelet[3431]: E0413 19:24:49.702995 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.708597 kubelet[3431]: E0413 19:24:49.707389 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.708597 kubelet[3431]: W0413 19:24:49.707484 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.708597 kubelet[3431]: E0413 19:24:49.707544 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.709751 kubelet[3431]: E0413 19:24:49.709696 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.709751 kubelet[3431]: W0413 19:24:49.709735 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.710026 kubelet[3431]: E0413 19:24:49.709769 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.710979 kubelet[3431]: E0413 19:24:49.710930 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.711699 kubelet[3431]: W0413 19:24:49.711629 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.712051 kubelet[3431]: E0413 19:24:49.711704 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.712874 kubelet[3431]: E0413 19:24:49.712826 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.712874 kubelet[3431]: W0413 19:24:49.712868 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.713232 kubelet[3431]: E0413 19:24:49.712914 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.715384 kubelet[3431]: E0413 19:24:49.715310 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.715384 kubelet[3431]: W0413 19:24:49.715350 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.715384 kubelet[3431]: E0413 19:24:49.715383 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.716796 kubelet[3431]: E0413 19:24:49.716747 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.716796 kubelet[3431]: W0413 19:24:49.716786 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.717151 kubelet[3431]: E0413 19:24:49.716819 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.718121 kubelet[3431]: E0413 19:24:49.718073 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.718121 kubelet[3431]: W0413 19:24:49.718110 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.719417 kubelet[3431]: E0413 19:24:49.718143 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.722084 kubelet[3431]: E0413 19:24:49.721734 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.722084 kubelet[3431]: W0413 19:24:49.721776 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.722084 kubelet[3431]: E0413 19:24:49.721816 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.724309 kubelet[3431]: E0413 19:24:49.724103 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.724309 kubelet[3431]: W0413 19:24:49.724149 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.724309 kubelet[3431]: E0413 19:24:49.724182 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.726980 kubelet[3431]: E0413 19:24:49.726878 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.726980 kubelet[3431]: W0413 19:24:49.726919 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.726980 kubelet[3431]: E0413 19:24:49.726954 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.728868 kubelet[3431]: E0413 19:24:49.728674 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.728868 kubelet[3431]: W0413 19:24:49.728714 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.728868 kubelet[3431]: E0413 19:24:49.728748 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.731329 kubelet[3431]: E0413 19:24:49.730636 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.731329 kubelet[3431]: W0413 19:24:49.730676 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.731329 kubelet[3431]: E0413 19:24:49.730714 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.734521 kubelet[3431]: E0413 19:24:49.734211 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.734521 kubelet[3431]: W0413 19:24:49.734251 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.734521 kubelet[3431]: E0413 19:24:49.734287 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.738099 kubelet[3431]: E0413 19:24:49.736194 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.738099 kubelet[3431]: W0413 19:24:49.737583 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.738099 kubelet[3431]: E0413 19:24:49.737620 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.741929 kubelet[3431]: E0413 19:24:49.741057 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.741929 kubelet[3431]: W0413 19:24:49.741101 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.741929 kubelet[3431]: E0413 19:24:49.741133 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.742787 kubelet[3431]: E0413 19:24:49.742729 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.742787 kubelet[3431]: W0413 19:24:49.742766 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.742969 kubelet[3431]: E0413 19:24:49.742800 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.751102 kubelet[3431]: E0413 19:24:49.750333 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:24:49.757884 systemd[1]: Started cri-containerd-eaceb40dc9673e6dbd8ffcb1061eb95d0420862ef098687bbeef84a9039e268b.scope - libcontainer container eaceb40dc9673e6dbd8ffcb1061eb95d0420862ef098687bbeef84a9039e268b. Apr 13 19:24:49.801520 kubelet[3431]: E0413 19:24:49.799711 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.801520 kubelet[3431]: W0413 19:24:49.799757 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.801520 kubelet[3431]: E0413 19:24:49.799792 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.840481 containerd[2017]: time="2026-04-13T19:24:49.839609401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qjr8g,Uid:c86e1ba7-5f4e-4fc9-81da-a46c76ced039,Namespace:calico-system,Attempt:0,}" Apr 13 19:24:49.841881 kubelet[3431]: E0413 19:24:49.841370 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.841881 kubelet[3431]: W0413 19:24:49.841403 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.841881 kubelet[3431]: E0413 19:24:49.841596 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.842556 kubelet[3431]: E0413 19:24:49.842414 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.843341 kubelet[3431]: W0413 19:24:49.842754 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.843341 kubelet[3431]: E0413 19:24:49.842959 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.844702 kubelet[3431]: E0413 19:24:49.844150 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.844702 kubelet[3431]: W0413 19:24:49.844285 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.844702 kubelet[3431]: E0413 19:24:49.844320 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.845888 kubelet[3431]: E0413 19:24:49.845840 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.846349 kubelet[3431]: W0413 19:24:49.846030 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.846349 kubelet[3431]: E0413 19:24:49.846071 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.848322 kubelet[3431]: E0413 19:24:49.848070 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.848322 kubelet[3431]: W0413 19:24:49.848105 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.848322 kubelet[3431]: E0413 19:24:49.848138 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.849383 kubelet[3431]: E0413 19:24:49.849344 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.850023 kubelet[3431]: W0413 19:24:49.849751 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.850023 kubelet[3431]: E0413 19:24:49.849799 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.850805 kubelet[3431]: E0413 19:24:49.850644 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.851878 kubelet[3431]: W0413 19:24:49.851587 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.851878 kubelet[3431]: E0413 19:24:49.851656 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.853510 kubelet[3431]: E0413 19:24:49.852892 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.853510 kubelet[3431]: W0413 19:24:49.853051 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.853510 kubelet[3431]: E0413 19:24:49.853098 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.858478 kubelet[3431]: E0413 19:24:49.857730 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.858478 kubelet[3431]: W0413 19:24:49.857766 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.858478 kubelet[3431]: E0413 19:24:49.857799 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.859236 kubelet[3431]: E0413 19:24:49.858688 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.859236 kubelet[3431]: W0413 19:24:49.858728 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.859236 kubelet[3431]: E0413 19:24:49.858758 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.860688 kubelet[3431]: E0413 19:24:49.860650 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.860883 kubelet[3431]: W0413 19:24:49.860855 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.861311 kubelet[3431]: E0413 19:24:49.860976 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.862626 kubelet[3431]: E0413 19:24:49.862588 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.863258 kubelet[3431]: W0413 19:24:49.862827 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.863258 kubelet[3431]: E0413 19:24:49.862868 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.864097 kubelet[3431]: E0413 19:24:49.864061 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.865055 kubelet[3431]: W0413 19:24:49.864510 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.865055 kubelet[3431]: E0413 19:24:49.864574 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.867942 kubelet[3431]: E0413 19:24:49.867637 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.867942 kubelet[3431]: W0413 19:24:49.867676 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.867942 kubelet[3431]: E0413 19:24:49.867726 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.869685 kubelet[3431]: E0413 19:24:49.869647 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.869912 kubelet[3431]: W0413 19:24:49.869884 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.870269 kubelet[3431]: E0413 19:24:49.870048 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.871886 kubelet[3431]: E0413 19:24:49.871635 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.871886 kubelet[3431]: W0413 19:24:49.871669 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.871886 kubelet[3431]: E0413 19:24:49.871700 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.872439 kubelet[3431]: E0413 19:24:49.872411 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.873661 kubelet[3431]: W0413 19:24:49.872728 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.873661 kubelet[3431]: E0413 19:24:49.872797 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.874486 kubelet[3431]: E0413 19:24:49.874182 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.874486 kubelet[3431]: W0413 19:24:49.874214 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.874486 kubelet[3431]: E0413 19:24:49.874250 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.876526 kubelet[3431]: E0413 19:24:49.876241 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.876526 kubelet[3431]: W0413 19:24:49.876275 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.876526 kubelet[3431]: E0413 19:24:49.876305 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.877563 kubelet[3431]: E0413 19:24:49.877213 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.877563 kubelet[3431]: W0413 19:24:49.877245 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.877563 kubelet[3431]: E0413 19:24:49.877277 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.881098 kubelet[3431]: E0413 19:24:49.880858 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.881098 kubelet[3431]: W0413 19:24:49.880890 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.881098 kubelet[3431]: E0413 19:24:49.880923 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.881098 kubelet[3431]: I0413 19:24:49.880977 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90202ccf-b846-4ffb-bfc4-994f0a0246ae-socket-dir\") pod \"csi-node-driver-hrfts\" (UID: \"90202ccf-b846-4ffb-bfc4-994f0a0246ae\") " pod="calico-system/csi-node-driver-hrfts" Apr 13 19:24:49.881949 kubelet[3431]: E0413 19:24:49.881779 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.881949 kubelet[3431]: W0413 19:24:49.881810 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.881949 kubelet[3431]: E0413 19:24:49.881839 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.881949 kubelet[3431]: I0413 19:24:49.881894 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90202ccf-b846-4ffb-bfc4-994f0a0246ae-kubelet-dir\") pod \"csi-node-driver-hrfts\" (UID: \"90202ccf-b846-4ffb-bfc4-994f0a0246ae\") " pod="calico-system/csi-node-driver-hrfts" Apr 13 19:24:49.882593 kubelet[3431]: E0413 19:24:49.882555 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.882593 kubelet[3431]: W0413 19:24:49.882588 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.882840 kubelet[3431]: E0413 19:24:49.882615 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.885556 kubelet[3431]: E0413 19:24:49.883967 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.885556 kubelet[3431]: W0413 19:24:49.884005 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.885556 kubelet[3431]: E0413 19:24:49.884035 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.886483 kubelet[3431]: E0413 19:24:49.886217 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.886483 kubelet[3431]: W0413 19:24:49.886255 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.886483 kubelet[3431]: E0413 19:24:49.886287 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.886483 kubelet[3431]: I0413 19:24:49.886341 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5nm\" (UniqueName: \"kubernetes.io/projected/90202ccf-b846-4ffb-bfc4-994f0a0246ae-kube-api-access-ws5nm\") pod \"csi-node-driver-hrfts\" (UID: \"90202ccf-b846-4ffb-bfc4-994f0a0246ae\") " pod="calico-system/csi-node-driver-hrfts" Apr 13 19:24:49.889967 kubelet[3431]: E0413 19:24:49.889907 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.889967 kubelet[3431]: W0413 19:24:49.889952 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.890358 kubelet[3431]: E0413 19:24:49.889986 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.890358 kubelet[3431]: I0413 19:24:49.890161 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90202ccf-b846-4ffb-bfc4-994f0a0246ae-registration-dir\") pod \"csi-node-driver-hrfts\" (UID: \"90202ccf-b846-4ffb-bfc4-994f0a0246ae\") " pod="calico-system/csi-node-driver-hrfts" Apr 13 19:24:49.892139 kubelet[3431]: E0413 19:24:49.892086 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.892139 kubelet[3431]: W0413 19:24:49.892126 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.892420 kubelet[3431]: E0413 19:24:49.892160 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.895323 kubelet[3431]: E0413 19:24:49.895266 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.895323 kubelet[3431]: W0413 19:24:49.895311 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.896691 kubelet[3431]: E0413 19:24:49.895345 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.897344 kubelet[3431]: E0413 19:24:49.897096 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.897760 kubelet[3431]: W0413 19:24:49.897376 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.897760 kubelet[3431]: E0413 19:24:49.897416 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.899281 kubelet[3431]: I0413 19:24:49.898643 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/90202ccf-b846-4ffb-bfc4-994f0a0246ae-varrun\") pod \"csi-node-driver-hrfts\" (UID: \"90202ccf-b846-4ffb-bfc4-994f0a0246ae\") " pod="calico-system/csi-node-driver-hrfts" Apr 13 19:24:49.899281 kubelet[3431]: E0413 19:24:49.899089 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.899281 kubelet[3431]: W0413 19:24:49.899115 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.899281 kubelet[3431]: E0413 19:24:49.899145 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.902907 kubelet[3431]: E0413 19:24:49.902346 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.902907 kubelet[3431]: W0413 19:24:49.902385 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.902907 kubelet[3431]: E0413 19:24:49.902418 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.907105 kubelet[3431]: E0413 19:24:49.906323 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.907105 kubelet[3431]: W0413 19:24:49.906355 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.907105 kubelet[3431]: E0413 19:24:49.906386 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.909812 kubelet[3431]: E0413 19:24:49.907576 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.909812 kubelet[3431]: W0413 19:24:49.907609 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.909812 kubelet[3431]: E0413 19:24:49.907639 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.911953 kubelet[3431]: E0413 19:24:49.911625 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.911953 kubelet[3431]: W0413 19:24:49.911661 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.911953 kubelet[3431]: E0413 19:24:49.911693 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:49.913969 kubelet[3431]: E0413 19:24:49.913904 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:49.913969 kubelet[3431]: W0413 19:24:49.913949 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:49.914236 kubelet[3431]: E0413 19:24:49.913983 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:49.931218 containerd[2017]: time="2026-04-13T19:24:49.930182534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:49.931218 containerd[2017]: time="2026-04-13T19:24:49.930280922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:49.931218 containerd[2017]: time="2026-04-13T19:24:49.930317930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:49.931218 containerd[2017]: time="2026-04-13T19:24:49.931006598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:49.984844 systemd[1]: Started cri-containerd-345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155.scope - libcontainer container 345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155. Apr 13 19:24:50.007623 kubelet[3431]: E0413 19:24:50.007389 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.008201 kubelet[3431]: W0413 19:24:50.007566 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.008201 kubelet[3431]: E0413 19:24:50.007735 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.008843 kubelet[3431]: E0413 19:24:50.008790 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.008843 kubelet[3431]: W0413 19:24:50.008825 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.009178 kubelet[3431]: E0413 19:24:50.008856 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.014249 kubelet[3431]: E0413 19:24:50.013863 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.014249 kubelet[3431]: W0413 19:24:50.014228 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.014904 kubelet[3431]: E0413 19:24:50.014815 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:50.020015 kubelet[3431]: E0413 19:24:50.019835 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.020015 kubelet[3431]: W0413 19:24:50.019924 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.020714 kubelet[3431]: E0413 19:24:50.019960 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.021600 kubelet[3431]: E0413 19:24:50.021383 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.021600 kubelet[3431]: W0413 19:24:50.021429 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.021997 kubelet[3431]: E0413 19:24:50.021706 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.023003 kubelet[3431]: E0413 19:24:50.022711 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.023003 kubelet[3431]: W0413 19:24:50.022742 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.023003 kubelet[3431]: E0413 19:24:50.022769 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.024279 kubelet[3431]: E0413 19:24:50.024151 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.024279 kubelet[3431]: W0413 19:24:50.024178 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.024279 kubelet[3431]: E0413 19:24:50.024206 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.024939 kubelet[3431]: E0413 19:24:50.024898 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.024939 kubelet[3431]: W0413 19:24:50.024932 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.025141 kubelet[3431]: E0413 19:24:50.024967 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:50.026281 kubelet[3431]: E0413 19:24:50.026189 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.026281 kubelet[3431]: W0413 19:24:50.026250 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.026575 kubelet[3431]: E0413 19:24:50.026285 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.028050 kubelet[3431]: E0413 19:24:50.027999 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.028050 kubelet[3431]: W0413 19:24:50.028042 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.028348 kubelet[3431]: E0413 19:24:50.028075 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.028992 kubelet[3431]: E0413 19:24:50.028945 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.028992 kubelet[3431]: W0413 19:24:50.028982 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.029419 kubelet[3431]: E0413 19:24:50.029014 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.030019 kubelet[3431]: E0413 19:24:50.029960 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.030019 kubelet[3431]: W0413 19:24:50.029999 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.030318 kubelet[3431]: E0413 19:24:50.030032 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.031830 kubelet[3431]: E0413 19:24:50.031779 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.031830 kubelet[3431]: W0413 19:24:50.031817 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.031830 kubelet[3431]: E0413 19:24:50.031852 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:50.032509 kubelet[3431]: E0413 19:24:50.032440 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.032509 kubelet[3431]: W0413 19:24:50.032503 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.032890 kubelet[3431]: E0413 19:24:50.032533 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.033710 kubelet[3431]: E0413 19:24:50.033609 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.033710 kubelet[3431]: W0413 19:24:50.033669 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.034002 kubelet[3431]: E0413 19:24:50.033700 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.035477 kubelet[3431]: E0413 19:24:50.035395 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.035477 kubelet[3431]: W0413 19:24:50.035436 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.035744 kubelet[3431]: E0413 19:24:50.035512 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.036432 kubelet[3431]: E0413 19:24:50.036384 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.036432 kubelet[3431]: W0413 19:24:50.036421 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.036712 kubelet[3431]: E0413 19:24:50.036498 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.037176 kubelet[3431]: E0413 19:24:50.037116 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.037176 kubelet[3431]: W0413 19:24:50.037150 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.037176 kubelet[3431]: E0413 19:24:50.037179 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:50.038358 kubelet[3431]: E0413 19:24:50.038232 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.038358 kubelet[3431]: W0413 19:24:50.038268 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.038358 kubelet[3431]: E0413 19:24:50.038304 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.039627 kubelet[3431]: E0413 19:24:50.038733 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.039627 kubelet[3431]: W0413 19:24:50.038764 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.039627 kubelet[3431]: E0413 19:24:50.038789 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.039627 kubelet[3431]: E0413 19:24:50.039517 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.039627 kubelet[3431]: W0413 19:24:50.039543 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.040128 kubelet[3431]: E0413 19:24:50.039765 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.041238 kubelet[3431]: E0413 19:24:50.040679 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.041238 kubelet[3431]: W0413 19:24:50.040719 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.041238 kubelet[3431]: E0413 19:24:50.040752 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.044033 kubelet[3431]: E0413 19:24:50.043849 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.044033 kubelet[3431]: W0413 19:24:50.043885 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.044033 kubelet[3431]: E0413 19:24:50.043917 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 19:24:50.046573 kubelet[3431]: E0413 19:24:50.045351 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.046573 kubelet[3431]: W0413 19:24:50.045386 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.046573 kubelet[3431]: E0413 19:24:50.045416 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.049147 kubelet[3431]: E0413 19:24:50.047522 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.049147 kubelet[3431]: W0413 19:24:50.047689 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.049147 kubelet[3431]: E0413 19:24:50.047737 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.079960 kubelet[3431]: E0413 19:24:50.079233 3431 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 19:24:50.081360 kubelet[3431]: W0413 19:24:50.080632 3431 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 19:24:50.083106 kubelet[3431]: E0413 19:24:50.081663 3431 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 19:24:50.097122 containerd[2017]: time="2026-04-13T19:24:50.097048331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qjr8g,Uid:c86e1ba7-5f4e-4fc9-81da-a46c76ced039,Namespace:calico-system,Attempt:0,} returns sandbox id \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\"" Apr 13 19:24:50.100498 containerd[2017]: time="2026-04-13T19:24:50.100413443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 19:24:50.140227 containerd[2017]: time="2026-04-13T19:24:50.140156471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599fc4f64c-85vlk,Uid:dee0f259-ff00-4696-bfeb-eb2dfff9d899,Namespace:calico-system,Attempt:0,} returns sandbox id \"eaceb40dc9673e6dbd8ffcb1061eb95d0420862ef098687bbeef84a9039e268b\"" Apr 13 19:24:51.122509 kubelet[3431]: E0413 19:24:51.121861 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:24:51.510523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197411795.mount: Deactivated successfully. 
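The repeated kubelet errors above all come from the FlexVolume prober: it execs each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the `init` argument and parses the command's stdout as JSON. Here the `nodeagent~uds/uds` binary is not present, so the captured output is empty and unmarshalling it fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode (the `DriverStatus` struct and its fields are illustrative assumptions, not kubelet's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus stands in for the response a FlexVolume prober expects back
// from a driver's "init" call; the field names are assumptions for illustration.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver executable is missing, so its captured stdout is empty.
	output := []byte("")

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: unexpected end of JSON input — the same error the
		// kubelet logs for every probe of nodeagent~uds.
		fmt.Println(err)
	}
}
```

The probe re-runs while the csi-node-driver pod's volumes are being reconciled, which appears to be why the same three-line error recurs throughout this window.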
Apr 13 19:24:51.655235 containerd[2017]: time="2026-04-13T19:24:51.654713258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:51.657419 containerd[2017]: time="2026-04-13T19:24:51.657077726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=5855345" Apr 13 19:24:51.659586 containerd[2017]: time="2026-04-13T19:24:51.659493206Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:51.664990 containerd[2017]: time="2026-04-13T19:24:51.664561502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:51.666258 containerd[2017]: time="2026-04-13T19:24:51.666118046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.564296919s" Apr 13 19:24:51.666258 containerd[2017]: time="2026-04-13T19:24:51.666183014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 13 19:24:51.669933 containerd[2017]: time="2026-04-13T19:24:51.669730838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 19:24:51.677442 containerd[2017]: time="2026-04-13T19:24:51.677341094Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 19:24:51.707385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020171569.mount: Deactivated successfully. Apr 13 19:24:51.711131 containerd[2017]: time="2026-04-13T19:24:51.710200539Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658\"" Apr 13 19:24:51.712322 containerd[2017]: time="2026-04-13T19:24:51.712085919Z" level=info msg="StartContainer for \"8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658\"" Apr 13 19:24:51.771774 systemd[1]: Started cri-containerd-8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658.scope - libcontainer container 8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658. Apr 13 19:24:51.831893 containerd[2017]: time="2026-04-13T19:24:51.831780207Z" level=info msg="StartContainer for \"8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658\" returns successfully" Apr 13 19:24:51.868334 systemd[1]: cri-containerd-8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658.scope: Deactivated successfully. 
Apr 13 19:24:52.079279 containerd[2017]: time="2026-04-13T19:24:52.078746244Z" level=info msg="shim disconnected" id=8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658 namespace=k8s.io Apr 13 19:24:52.079279 containerd[2017]: time="2026-04-13T19:24:52.078824124Z" level=warning msg="cleaning up after shim disconnected" id=8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658 namespace=k8s.io Apr 13 19:24:52.079279 containerd[2017]: time="2026-04-13T19:24:52.078846288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:52.107961 containerd[2017]: time="2026-04-13T19:24:52.107893129Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:24:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:24:52.463989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fa8d036388e5b6f04402edd5e60fa3e82762f723c97bca13220dc5ebf992658-rootfs.mount: Deactivated successfully. Apr 13 19:24:53.122070 kubelet[3431]: E0413 19:24:53.121991 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:24:53.836486 containerd[2017]: time="2026-04-13T19:24:53.835080485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:53.837269 containerd[2017]: time="2026-04-13T19:24:53.837226001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=32467511" Apr 13 19:24:53.839689 containerd[2017]: time="2026-04-13T19:24:53.839650049Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:53.845086 containerd[2017]: time="2026-04-13T19:24:53.845017901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:53.847013 containerd[2017]: time="2026-04-13T19:24:53.846947525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.177122595s" Apr 13 19:24:53.847013 containerd[2017]: time="2026-04-13T19:24:53.847008557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 19:24:53.849924 containerd[2017]: time="2026-04-13T19:24:53.849875633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 19:24:53.897057 containerd[2017]: time="2026-04-13T19:24:53.896984430Z" level=info msg="CreateContainer within sandbox \"eaceb40dc9673e6dbd8ffcb1061eb95d0420862ef098687bbeef84a9039e268b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 19:24:53.925797 containerd[2017]: 
time="2026-04-13T19:24:53.925721286Z" level=info msg="CreateContainer within sandbox \"eaceb40dc9673e6dbd8ffcb1061eb95d0420862ef098687bbeef84a9039e268b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9e234c6e5411fb435cd7dcac33d5aa3187e73506f8a3c9af0eaf463292570ae0\"" Apr 13 19:24:53.927806 containerd[2017]: time="2026-04-13T19:24:53.926842734Z" level=info msg="StartContainer for \"9e234c6e5411fb435cd7dcac33d5aa3187e73506f8a3c9af0eaf463292570ae0\"" Apr 13 19:24:53.981784 systemd[1]: Started cri-containerd-9e234c6e5411fb435cd7dcac33d5aa3187e73506f8a3c9af0eaf463292570ae0.scope - libcontainer container 9e234c6e5411fb435cd7dcac33d5aa3187e73506f8a3c9af0eaf463292570ae0. Apr 13 19:24:54.051714 containerd[2017]: time="2026-04-13T19:24:54.050224130Z" level=info msg="StartContainer for \"9e234c6e5411fb435cd7dcac33d5aa3187e73506f8a3c9af0eaf463292570ae0\" returns successfully" Apr 13 19:24:54.343047 kubelet[3431]: I0413 19:24:54.342939 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599fc4f64c-85vlk" podStartSLOduration=1.636608162 podStartE2EDuration="5.342916936s" podCreationTimestamp="2026-04-13 19:24:49 +0000 UTC" firstStartedPulling="2026-04-13 19:24:50.143272019 +0000 UTC m=+29.297210715" lastFinishedPulling="2026-04-13 19:24:53.849580781 +0000 UTC m=+33.003519489" observedRunningTime="2026-04-13 19:24:54.341110996 +0000 UTC m=+33.495049716" watchObservedRunningTime="2026-04-13 19:24:54.342916936 +0000 UTC m=+33.496855644" Apr 13 19:24:55.125487 kubelet[3431]: E0413 19:24:55.122580 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:24:55.298909 kubelet[3431]: I0413 19:24:55.298847 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:24:57.122127 kubelet[3431]: E0413 19:24:57.121987 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:24:59.123841 kubelet[3431]: E0413 19:24:59.122752 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:25:00.359488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722723464.mount: Deactivated successfully. 
Apr 13 19:25:00.419962 containerd[2017]: time="2026-04-13T19:25:00.419875474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:00.421975 containerd[2017]: time="2026-04-13T19:25:00.421679242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 13 19:25:00.424550 containerd[2017]: time="2026-04-13T19:25:00.424026622Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:00.431055 containerd[2017]: time="2026-04-13T19:25:00.429262858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:00.432306 containerd[2017]: time="2026-04-13T19:25:00.432251278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.581927793s" Apr 13 19:25:00.432622 containerd[2017]: time="2026-04-13T19:25:00.432586846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 13 19:25:00.441889 containerd[2017]: time="2026-04-13T19:25:00.441838582Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 19:25:00.476131 containerd[2017]: time="2026-04-13T19:25:00.476075446Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e\"" Apr 13 19:25:00.477597 containerd[2017]: time="2026-04-13T19:25:00.477535774Z" level=info msg="StartContainer for \"d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e\"" Apr 13 19:25:00.536184 systemd[1]: Started cri-containerd-d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e.scope - libcontainer container d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e. Apr 13 19:25:00.589946 containerd[2017]: time="2026-04-13T19:25:00.589842107Z" level=info msg="StartContainer for \"d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e\" returns successfully" Apr 13 19:25:00.789316 systemd[1]: cri-containerd-d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e.scope: Deactivated successfully. 
Apr 13 19:25:01.123057 kubelet[3431]: E0413 19:25:01.122902 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:25:01.361899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e-rootfs.mount: Deactivated successfully. Apr 13 19:25:01.775949 containerd[2017]: time="2026-04-13T19:25:01.775569505Z" level=info msg="shim disconnected" id=d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e namespace=k8s.io Apr 13 19:25:01.775949 containerd[2017]: time="2026-04-13T19:25:01.775649545Z" level=warning msg="cleaning up after shim disconnected" id=d226ec17728179b00189e8166a91acb9e0a45f292d51d40c4ff92f83c64e112e namespace=k8s.io Apr 13 19:25:01.775949 containerd[2017]: time="2026-04-13T19:25:01.775672525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:02.335414 containerd[2017]: time="2026-04-13T19:25:02.335251499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 19:25:03.123505 kubelet[3431]: E0413 19:25:03.122109 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:25:04.335708 kubelet[3431]: I0413 19:25:04.335649 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:05.124315 kubelet[3431]: E0413 19:25:05.122559 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:25:06.614012 containerd[2017]: time="2026-04-13T19:25:06.613948229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:06.617741 containerd[2017]: time="2026-04-13T19:25:06.617675465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 13 19:25:06.622413 containerd[2017]: time="2026-04-13T19:25:06.622360529Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:06.628157 containerd[2017]: time="2026-04-13T19:25:06.628077329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:06.630777 containerd[2017]: time="2026-04-13T19:25:06.630701081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 
4.295368042s" Apr 13 19:25:06.630777 containerd[2017]: time="2026-04-13T19:25:06.630765089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 13 19:25:06.640700 containerd[2017]: time="2026-04-13T19:25:06.640505765Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 19:25:06.667442 containerd[2017]: time="2026-04-13T19:25:06.667365293Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3\"" Apr 13 19:25:06.668810 containerd[2017]: time="2026-04-13T19:25:06.668596217Z" level=info msg="StartContainer for \"6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3\"" Apr 13 19:25:06.734786 systemd[1]: Started cri-containerd-6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3.scope - libcontainer container 6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3. Apr 13 19:25:06.798478 containerd[2017]: time="2026-04-13T19:25:06.797276010Z" level=info msg="StartContainer for \"6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3\" returns successfully" Apr 13 19:25:07.123836 kubelet[3431]: E0413 19:25:07.123758 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:25:08.635521 containerd[2017]: time="2026-04-13T19:25:08.635417359Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:25:08.640641 systemd[1]: cri-containerd-6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3.scope: Deactivated successfully. Apr 13 19:25:08.641560 systemd[1]: cri-containerd-6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3.scope: Consumed 1.026s CPU time. Apr 13 19:25:08.683427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3-rootfs.mount: Deactivated successfully. 
Apr 13 19:25:08.693555 containerd[2017]: time="2026-04-13T19:25:08.693411283Z" level=info msg="shim disconnected" id=6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3 namespace=k8s.io Apr 13 19:25:08.693555 containerd[2017]: time="2026-04-13T19:25:08.693555379Z" level=warning msg="cleaning up after shim disconnected" id=6d264649366fde17c9be1b9361f6e46c0855994367582f62a776c0a757ccc9f3 namespace=k8s.io Apr 13 19:25:08.694018 containerd[2017]: time="2026-04-13T19:25:08.693579115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:08.719569 kubelet[3431]: I0413 19:25:08.719389 3431 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 19:25:08.798945 systemd[1]: Created slice kubepods-burstable-podd9085047_db0c_44e3_8b9c_c8fdeea9cd63.slice - libcontainer container kubepods-burstable-podd9085047_db0c_44e3_8b9c_c8fdeea9cd63.slice. Apr 13 19:25:08.820550 systemd[1]: Created slice kubepods-besteffort-pod519803bd_aa51_492a_ba0b_1cc7713863b8.slice - libcontainer container kubepods-besteffort-pod519803bd_aa51_492a_ba0b_1cc7713863b8.slice. Apr 13 19:25:08.870300 systemd[1]: Created slice kubepods-besteffort-pod2180400b_1490_4820_88c4_4c8f7911915a.slice - libcontainer container kubepods-besteffort-pod2180400b_1490_4820_88c4_4c8f7911915a.slice. Apr 13 19:25:08.872903 kubelet[3431]: I0413 19:25:08.872842 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp4pd\" (UniqueName: \"kubernetes.io/projected/d9085047-db0c-44e3-8b9c-c8fdeea9cd63-kube-api-access-hp4pd\") pod \"coredns-674b8bbfcf-nbvwd\" (UID: \"d9085047-db0c-44e3-8b9c-c8fdeea9cd63\") " pod="kube-system/coredns-674b8bbfcf-nbvwd" Apr 13 19:25:08.872903 kubelet[3431]: I0413 19:25:08.872913 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxg4z\" (UniqueName: \"kubernetes.io/projected/519803bd-aa51-492a-ba0b-1cc7713863b8-kube-api-access-dxg4z\") pod \"calico-apiserver-648588977d-qbzbd\" (UID: \"519803bd-aa51-492a-ba0b-1cc7713863b8\") " pod="calico-system/calico-apiserver-648588977d-qbzbd" Apr 13 19:25:08.873153 kubelet[3431]: I0413 19:25:08.872960 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9085047-db0c-44e3-8b9c-c8fdeea9cd63-config-volume\") pod \"coredns-674b8bbfcf-nbvwd\" (UID: \"d9085047-db0c-44e3-8b9c-c8fdeea9cd63\") " pod="kube-system/coredns-674b8bbfcf-nbvwd" Apr 13 19:25:08.873153 kubelet[3431]: I0413 19:25:08.873013 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/519803bd-aa51-492a-ba0b-1cc7713863b8-calico-apiserver-certs\") pod \"calico-apiserver-648588977d-qbzbd\" (UID: \"519803bd-aa51-492a-ba0b-1cc7713863b8\") " pod="calico-system/calico-apiserver-648588977d-qbzbd" Apr 13 19:25:08.895617 systemd[1]: Created slice kubepods-besteffort-pod1adedd17_1d2f_4205_a531_c8bdcaf6fdc9.slice - libcontainer container kubepods-besteffort-pod1adedd17_1d2f_4205_a531_c8bdcaf6fdc9.slice. Apr 13 19:25:08.919761 systemd[1]: Created slice kubepods-besteffort-podca478177_20bf_4954_9621_ef6793bbf95a.slice - libcontainer container kubepods-besteffort-podca478177_20bf_4954_9621_ef6793bbf95a.slice. 
Apr 13 19:25:08.931271 systemd[1]: Created slice kubepods-burstable-pod31e132ac_0e5d_4c85_a51c_36b4b5148995.slice - libcontainer container kubepods-burstable-pod31e132ac_0e5d_4c85_a51c_36b4b5148995.slice. Apr 13 19:25:08.951532 systemd[1]: Created slice kubepods-besteffort-pod856ccb99_242b_42ff_a8da_016e9416c1be.slice - libcontainer container kubepods-besteffort-pod856ccb99_242b_42ff_a8da_016e9416c1be.slice. Apr 13 19:25:08.974177 kubelet[3431]: I0413 19:25:08.974041 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmk9b\" (UniqueName: \"kubernetes.io/projected/2180400b-1490-4820-88c4-4c8f7911915a-kube-api-access-lmk9b\") pod \"whisker-6d6694fbb6-hdzjx\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " pod="calico-system/whisker-6d6694fbb6-hdzjx" Apr 13 19:25:08.974177 kubelet[3431]: I0413 19:25:08.974113 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-nginx-config\") pod \"whisker-6d6694fbb6-hdzjx\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " pod="calico-system/whisker-6d6694fbb6-hdzjx" Apr 13 19:25:08.974177 kubelet[3431]: I0413 19:25:08.974161 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6j6\" (UniqueName: \"kubernetes.io/projected/856ccb99-242b-42ff-a8da-016e9416c1be-kube-api-access-sx6j6\") pod \"calico-kube-controllers-6dfbd8bf44-pjxdt\" (UID: \"856ccb99-242b-42ff-a8da-016e9416c1be\") " pod="calico-system/calico-kube-controllers-6dfbd8bf44-pjxdt" Apr 13 19:25:08.974540 kubelet[3431]: I0413 19:25:08.974203 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31e132ac-0e5d-4c85-a51c-36b4b5148995-config-volume\") pod \"coredns-674b8bbfcf-gtvnr\" (UID: \"31e132ac-0e5d-4c85-a51c-36b4b5148995\") " pod="kube-system/coredns-674b8bbfcf-gtvnr" Apr 13 19:25:08.974540 kubelet[3431]: I0413 19:25:08.974243 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1adedd17-1d2f-4205-a531-c8bdcaf6fdc9-calico-apiserver-certs\") pod \"calico-apiserver-648588977d-fkz57\" (UID: \"1adedd17-1d2f-4205-a531-c8bdcaf6fdc9\") " pod="calico-system/calico-apiserver-648588977d-fkz57" Apr 13 19:25:08.974540 kubelet[3431]: I0413 19:25:08.974288 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca478177-20bf-4954-9621-ef6793bbf95a-config\") pod \"goldmane-5b85766d88-b6rr7\" (UID: \"ca478177-20bf-4954-9621-ef6793bbf95a\") " pod="calico-system/goldmane-5b85766d88-b6rr7" Apr 13 19:25:08.974540 kubelet[3431]: I0413 19:25:08.974328 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca478177-20bf-4954-9621-ef6793bbf95a-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-b6rr7\" (UID: \"ca478177-20bf-4954-9621-ef6793bbf95a\") " pod="calico-system/goldmane-5b85766d88-b6rr7" Apr 13 19:25:08.974540 kubelet[3431]: I0413 19:25:08.974363 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/2180400b-1490-4820-88c4-4c8f7911915a-whisker-backend-key-pair\") pod \"whisker-6d6694fbb6-hdzjx\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " pod="calico-system/whisker-6d6694fbb6-hdzjx" Apr 13 19:25:08.975014 kubelet[3431]: I0413 19:25:08.974407 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ca478177-20bf-4954-9621-ef6793bbf95a-goldmane-key-pair\") pod \"goldmane-5b85766d88-b6rr7\" (UID: \"ca478177-20bf-4954-9621-ef6793bbf95a\") " pod="calico-system/goldmane-5b85766d88-b6rr7" Apr 13 19:25:08.975014 kubelet[3431]: I0413 19:25:08.974444 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-whisker-ca-bundle\") pod \"whisker-6d6694fbb6-hdzjx\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " pod="calico-system/whisker-6d6694fbb6-hdzjx" Apr 13 19:25:08.975014 kubelet[3431]: I0413 19:25:08.974549 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/856ccb99-242b-42ff-a8da-016e9416c1be-tigera-ca-bundle\") pod \"calico-kube-controllers-6dfbd8bf44-pjxdt\" (UID: \"856ccb99-242b-42ff-a8da-016e9416c1be\") " pod="calico-system/calico-kube-controllers-6dfbd8bf44-pjxdt" Apr 13 19:25:08.975014 kubelet[3431]: I0413 19:25:08.974587 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfng\" (UniqueName: \"kubernetes.io/projected/1adedd17-1d2f-4205-a531-c8bdcaf6fdc9-kube-api-access-wtfng\") pod \"calico-apiserver-648588977d-fkz57\" (UID: \"1adedd17-1d2f-4205-a531-c8bdcaf6fdc9\") " pod="calico-system/calico-apiserver-648588977d-fkz57" Apr 13 19:25:08.975014 kubelet[3431]: I0413 19:25:08.974646 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjr84\" (UniqueName: \"kubernetes.io/projected/ca478177-20bf-4954-9621-ef6793bbf95a-kube-api-access-qjr84\") pod \"goldmane-5b85766d88-b6rr7\" (UID: \"ca478177-20bf-4954-9621-ef6793bbf95a\") " pod="calico-system/goldmane-5b85766d88-b6rr7" Apr 13 19:25:08.975532 kubelet[3431]: I0413 19:25:08.974695 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qh4n\" (UniqueName: \"kubernetes.io/projected/31e132ac-0e5d-4c85-a51c-36b4b5148995-kube-api-access-4qh4n\") pod \"coredns-674b8bbfcf-gtvnr\" (UID: \"31e132ac-0e5d-4c85-a51c-36b4b5148995\") " pod="kube-system/coredns-674b8bbfcf-gtvnr" Apr 13 19:25:09.147812 containerd[2017]: time="2026-04-13T19:25:09.146925413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-qbzbd,Uid:519803bd-aa51-492a-ba0b-1cc7713863b8,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:09.157792 containerd[2017]: time="2026-04-13T19:25:09.152282117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbvwd,Uid:d9085047-db0c-44e3-8b9c-c8fdeea9cd63,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:09.156250 systemd[1]: Created slice kubepods-besteffort-pod90202ccf_b846_4ffb_bfc4_994f0a0246ae.slice - libcontainer container kubepods-besteffort-pod90202ccf_b846_4ffb_bfc4_994f0a0246ae.slice. 
Apr 13 19:25:09.166493 containerd[2017]: time="2026-04-13T19:25:09.166000709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrfts,Uid:90202ccf-b846-4ffb-bfc4-994f0a0246ae,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:09.186303 containerd[2017]: time="2026-04-13T19:25:09.186246245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d6694fbb6-hdzjx,Uid:2180400b-1490-4820-88c4-4c8f7911915a,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:09.206871 containerd[2017]: time="2026-04-13T19:25:09.206530578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-fkz57,Uid:1adedd17-1d2f-4205-a531-c8bdcaf6fdc9,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:09.234005 containerd[2017]: time="2026-04-13T19:25:09.233524914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-b6rr7,Uid:ca478177-20bf-4954-9621-ef6793bbf95a,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:09.245477 containerd[2017]: time="2026-04-13T19:25:09.245372142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtvnr,Uid:31e132ac-0e5d-4c85-a51c-36b4b5148995,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:09.259858 containerd[2017]: time="2026-04-13T19:25:09.259787466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfbd8bf44-pjxdt,Uid:856ccb99-242b-42ff-a8da-016e9416c1be,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:09.476388 containerd[2017]: time="2026-04-13T19:25:09.476201839Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 19:25:09.594719 containerd[2017]: time="2026-04-13T19:25:09.594644995Z" level=info msg="CreateContainer within sandbox \"345f3173c18ed7e7c603a6ae452f920d25e6807ba40e9fd2b2525f483275c155\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1526be1e88d9be479ab75934b3020217b467fa863d9c31d36e9ee08b847b4a1c\"" Apr 13 19:25:09.596299 containerd[2017]: time="2026-04-13T19:25:09.595611667Z" level=info msg="StartContainer for \"1526be1e88d9be479ab75934b3020217b467fa863d9c31d36e9ee08b847b4a1c\"" Apr 13 19:25:09.836059 containerd[2017]: time="2026-04-13T19:25:09.835850361Z" level=error msg="Failed to destroy network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.842832 containerd[2017]: time="2026-04-13T19:25:09.842737821Z" level=error msg="encountered an error cleaning up failed sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.843001 containerd[2017]: time="2026-04-13T19:25:09.842862489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-qbzbd,Uid:519803bd-aa51-492a-ba0b-1cc7713863b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.844025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8-shm.mount: Deactivated successfully. Apr 13 19:25:09.844793 kubelet[3431]: E0413 19:25:09.844626 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.844793 kubelet[3431]: E0413 19:25:09.844731 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-648588977d-qbzbd" Apr 13 19:25:09.844793 kubelet[3431]: E0413 19:25:09.844767 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-648588977d-qbzbd" Apr 13 19:25:09.845949 kubelet[3431]: E0413 19:25:09.844846 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-648588977d-qbzbd_calico-system(519803bd-aa51-492a-ba0b-1cc7713863b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-648588977d-qbzbd_calico-system(519803bd-aa51-492a-ba0b-1cc7713863b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-648588977d-qbzbd" podUID="519803bd-aa51-492a-ba0b-1cc7713863b8" Apr 13 19:25:09.885441 containerd[2017]: time="2026-04-13T19:25:09.885209961Z" level=error msg="Failed to destroy network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.886689 containerd[2017]: time="2026-04-13T19:25:09.886402197Z" level=error msg="encountered an error cleaning up failed sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.886689 containerd[2017]: time="2026-04-13T19:25:09.886536321Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6d6694fbb6-hdzjx,Uid:2180400b-1490-4820-88c4-4c8f7911915a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.889544 kubelet[3431]: E0413 19:25:09.887537 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.889544 kubelet[3431]: E0413 19:25:09.887640 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d6694fbb6-hdzjx" Apr 13 19:25:09.889544 kubelet[3431]: E0413 19:25:09.887685 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d6694fbb6-hdzjx" Apr 13 19:25:09.889936 kubelet[3431]: E0413 19:25:09.887775 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d6694fbb6-hdzjx_calico-system(2180400b-1490-4820-88c4-4c8f7911915a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d6694fbb6-hdzjx_calico-system(2180400b-1490-4820-88c4-4c8f7911915a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d6694fbb6-hdzjx" podUID="2180400b-1490-4820-88c4-4c8f7911915a" Apr 13 19:25:09.893982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a-shm.mount: Deactivated successfully. Apr 13 19:25:09.957835 containerd[2017]: time="2026-04-13T19:25:09.957681273Z" level=error msg="Failed to destroy network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.967374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715-shm.mount: Deactivated successfully. 
Apr 13 19:25:09.968769 containerd[2017]: time="2026-04-13T19:25:09.967065069Z" level=error msg="encountered an error cleaning up failed sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.969134 containerd[2017]: time="2026-04-13T19:25:09.969051585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfbd8bf44-pjxdt,Uid:856ccb99-242b-42ff-a8da-016e9416c1be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.971381 kubelet[3431]: E0413 19:25:09.969668 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:09.971381 kubelet[3431]: E0413 19:25:09.969757 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dfbd8bf44-pjxdt" Apr 13 19:25:09.971381 kubelet[3431]: E0413 19:25:09.969794 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dfbd8bf44-pjxdt" Apr 13 19:25:09.971988 kubelet[3431]: E0413 19:25:09.969870 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dfbd8bf44-pjxdt_calico-system(856ccb99-242b-42ff-a8da-016e9416c1be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dfbd8bf44-pjxdt_calico-system(856ccb99-242b-42ff-a8da-016e9416c1be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dfbd8bf44-pjxdt" podUID="856ccb99-242b-42ff-a8da-016e9416c1be" Apr 13 19:25:09.986402 systemd[1]: Started cri-containerd-1526be1e88d9be479ab75934b3020217b467fa863d9c31d36e9ee08b847b4a1c.scope - libcontainer container 1526be1e88d9be479ab75934b3020217b467fa863d9c31d36e9ee08b847b4a1c. 
Apr 13 19:25:10.015840 containerd[2017]: time="2026-04-13T19:25:10.015761202Z" level=error msg="Failed to destroy network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.017667 containerd[2017]: time="2026-04-13T19:25:10.017606106Z" level=error msg="Failed to destroy network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.018934 containerd[2017]: time="2026-04-13T19:25:10.018251478Z" level=error msg="encountered an error cleaning up failed sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.018934 containerd[2017]: time="2026-04-13T19:25:10.018783354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrfts,Uid:90202ccf-b846-4ffb-bfc4-994f0a0246ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.021782 kubelet[3431]: E0413 19:25:10.021086 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.021782 kubelet[3431]: E0413 19:25:10.021194 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrfts" Apr 13 19:25:10.021782 kubelet[3431]: E0413 19:25:10.021237 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrfts" Apr 13 19:25:10.022093 kubelet[3431]: E0413 19:25:10.021332 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hrfts_calico-system(90202ccf-b846-4ffb-bfc4-994f0a0246ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-hrfts_calico-system(90202ccf-b846-4ffb-bfc4-994f0a0246ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hrfts" podUID="90202ccf-b846-4ffb-bfc4-994f0a0246ae" Apr 13 19:25:10.023664 containerd[2017]: time="2026-04-13T19:25:10.020434806Z" level=error msg="encountered an error cleaning up failed sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.023664 containerd[2017]: time="2026-04-13T19:25:10.022708518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-fkz57,Uid:1adedd17-1d2f-4205-a531-c8bdcaf6fdc9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.025254 kubelet[3431]: E0413 19:25:10.024202 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.025254 kubelet[3431]: E0413 19:25:10.024276 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-648588977d-fkz57" Apr 13 19:25:10.025254 kubelet[3431]: E0413 19:25:10.024310 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-648588977d-fkz57" Apr 13 19:25:10.025507 kubelet[3431]: E0413 19:25:10.024397 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-648588977d-fkz57_calico-system(1adedd17-1d2f-4205-a531-c8bdcaf6fdc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-648588977d-fkz57_calico-system(1adedd17-1d2f-4205-a531-c8bdcaf6fdc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-648588977d-fkz57" podUID="1adedd17-1d2f-4205-a531-c8bdcaf6fdc9" Apr 13 19:25:10.045383 containerd[2017]: time="2026-04-13T19:25:10.044642022Z" level=error msg="Failed to destroy network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.045383 containerd[2017]: time="2026-04-13T19:25:10.045249894Z" level=error msg="encountered an error cleaning up failed sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.045383 containerd[2017]: time="2026-04-13T19:25:10.045339450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbvwd,Uid:d9085047-db0c-44e3-8b9c-c8fdeea9cd63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.047411 kubelet[3431]: E0413 19:25:10.045933 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.047411 kubelet[3431]: E0413 19:25:10.046012 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nbvwd" Apr 13 19:25:10.047411 kubelet[3431]: E0413 19:25:10.046074 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nbvwd" Apr 13 19:25:10.047759 kubelet[3431]: E0413 19:25:10.046167 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nbvwd_kube-system(d9085047-db0c-44e3-8b9c-c8fdeea9cd63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nbvwd_kube-system(d9085047-db0c-44e3-8b9c-c8fdeea9cd63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nbvwd" podUID="d9085047-db0c-44e3-8b9c-c8fdeea9cd63" Apr 13 19:25:10.095995 containerd[2017]: time="2026-04-13T19:25:10.094002594Z" level=error msg="Failed to destroy network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.100428 containerd[2017]: time="2026-04-13T19:25:10.099497778Z" level=error msg="encountered an error cleaning up failed sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.100428 containerd[2017]: time="2026-04-13T19:25:10.099583170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-b6rr7,Uid:ca478177-20bf-4954-9621-ef6793bbf95a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.100428 containerd[2017]: time="2026-04-13T19:25:10.099684534Z" level=info msg="StartContainer for \"1526be1e88d9be479ab75934b3020217b467fa863d9c31d36e9ee08b847b4a1c\" returns successfully" Apr 13 19:25:10.101897 kubelet[3431]: E0413 19:25:10.099906 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.101897 kubelet[3431]: E0413 19:25:10.099987 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-b6rr7" Apr 13 19:25:10.101897 kubelet[3431]: E0413 19:25:10.100030 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-b6rr7" Apr 13 19:25:10.103437 kubelet[3431]: E0413 19:25:10.100108 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-5b85766d88-b6rr7_calico-system(ca478177-20bf-4954-9621-ef6793bbf95a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-b6rr7_calico-system(ca478177-20bf-4954-9621-ef6793bbf95a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-b6rr7" podUID="ca478177-20bf-4954-9621-ef6793bbf95a" Apr 13 19:25:10.111243 containerd[2017]: time="2026-04-13T19:25:10.110989866Z" level=error msg="Failed to destroy network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.113399 containerd[2017]: time="2026-04-13T19:25:10.113320710Z" level=error msg="encountered an error cleaning up failed sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.113735 containerd[2017]: time="2026-04-13T19:25:10.113424234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtvnr,Uid:31e132ac-0e5d-4c85-a51c-36b4b5148995,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.113907 kubelet[3431]: E0413 19:25:10.113811 3431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 19:25:10.113907 kubelet[3431]: E0413 19:25:10.113888 3431 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gtvnr" Apr 13 19:25:10.114066 kubelet[3431]: E0413 19:25:10.113923 3431 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gtvnr" Apr 13 19:25:10.114066 kubelet[3431]: E0413 19:25:10.114021 3431 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gtvnr_kube-system(31e132ac-0e5d-4c85-a51c-36b4b5148995)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gtvnr_kube-system(31e132ac-0e5d-4c85-a51c-36b4b5148995)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gtvnr" podUID="31e132ac-0e5d-4c85-a51c-36b4b5148995" Apr 13 19:25:10.412404 kubelet[3431]: I0413 19:25:10.412347 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:10.412863 containerd[2017]: time="2026-04-13T19:25:10.412801736Z" level=info msg="StopPodSandbox for \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\"" Apr 13 19:25:10.413189 containerd[2017]: time="2026-04-13T19:25:10.413109260Z" level=info msg="Ensure that sandbox b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a in task-service has been cleanup successfully" Apr 13 19:25:10.421832 kubelet[3431]: I0413 19:25:10.421758 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:10.425923 containerd[2017]: time="2026-04-13T19:25:10.425850320Z" level=info msg="StopPodSandbox for \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\"" Apr 13 19:25:10.426267 containerd[2017]: time="2026-04-13T19:25:10.426218564Z" level=info msg="Ensure that sandbox 876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f in task-service has been cleanup successfully" Apr 13 19:25:10.437193 kubelet[3431]: I0413 19:25:10.436064 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:10.440532 containerd[2017]: time="2026-04-13T19:25:10.440170952Z" level=info msg="StopPodSandbox for \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\"" Apr 13 19:25:10.442850 containerd[2017]: time="2026-04-13T19:25:10.442711028Z" level=info msg="Ensure that sandbox 7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8 in task-service has been cleanup successfully" Apr 13 19:25:10.450045 kubelet[3431]: I0413 19:25:10.449332 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:10.452659 containerd[2017]: time="2026-04-13T19:25:10.451732544Z" level=info msg="StopPodSandbox for \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\"" Apr 13 19:25:10.454073 containerd[2017]: time="2026-04-13T19:25:10.453525776Z" level=info msg="Ensure that sandbox 620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661 in task-service has been cleanup successfully" Apr 13 19:25:10.464555 kubelet[3431]: I0413 19:25:10.464387 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:10.469852 containerd[2017]: time="2026-04-13T19:25:10.469586456Z" level=info msg="StopPodSandbox for 
\"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\"" Apr 13 19:25:10.470061 containerd[2017]: time="2026-04-13T19:25:10.469900352Z" level=info msg="Ensure that sandbox 663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d in task-service has been cleanup successfully" Apr 13 19:25:10.480882 kubelet[3431]: I0413 19:25:10.480318 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:10.492327 containerd[2017]: time="2026-04-13T19:25:10.492152372Z" level=info msg="StopPodSandbox for \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\"" Apr 13 19:25:10.502650 containerd[2017]: time="2026-04-13T19:25:10.499468340Z" level=info msg="Ensure that sandbox ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f in task-service has been cleanup successfully" Apr 13 19:25:10.516327 kubelet[3431]: I0413 19:25:10.516254 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:10.520760 containerd[2017]: time="2026-04-13T19:25:10.520702952Z" level=info msg="StopPodSandbox for \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\"" Apr 13 19:25:10.521209 containerd[2017]: time="2026-04-13T19:25:10.521171036Z" level=info msg="Ensure that sandbox 7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc in task-service has been cleanup successfully" Apr 13 19:25:10.565241 kubelet[3431]: I0413 19:25:10.564414 3431 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:10.570941 containerd[2017]: time="2026-04-13T19:25:10.570888932Z" level=info msg="StopPodSandbox for \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\"" Apr 13 19:25:10.578773 containerd[2017]: time="2026-04-13T19:25:10.578313848Z" level=info msg="Ensure that sandbox 9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715 in task-service has been cleanup successfully" Apr 13 19:25:10.611477 kubelet[3431]: I0413 19:25:10.611112 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qjr8g" podStartSLOduration=5.078907395 podStartE2EDuration="21.611085213s" podCreationTimestamp="2026-04-13 19:24:49 +0000 UTC" firstStartedPulling="2026-04-13 19:24:50.099737699 +0000 UTC m=+29.253676407" lastFinishedPulling="2026-04-13 19:25:06.631915529 +0000 UTC m=+45.785854225" observedRunningTime="2026-04-13 19:25:10.610848789 +0000 UTC m=+49.764787545" watchObservedRunningTime="2026-04-13 19:25:10.611085213 +0000 UTC m=+49.765023933" Apr 13 19:25:10.687183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f-shm.mount: Deactivated successfully. Apr 13 19:25:10.687761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc-shm.mount: Deactivated successfully. Apr 13 19:25:10.687929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661-shm.mount: Deactivated successfully. Apr 13 19:25:10.688077 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d-shm.mount: Deactivated successfully. 
Apr 13 19:25:10.688218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f-shm.mount: Deactivated successfully. Apr 13 19:25:10.758950 systemd[1]: run-containerd-runc-k8s.io-1526be1e88d9be479ab75934b3020217b467fa863d9c31d36e9ee08b847b4a1c-runc.cGKtr5.mount: Deactivated successfully. Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.143 [INFO][4610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.148 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" iface="eth0" netns="/var/run/netns/cni-e4cd03cc-72d5-8f0c-38ab-0cd0d610ecdc" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.149 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" iface="eth0" netns="/var/run/netns/cni-e4cd03cc-72d5-8f0c-38ab-0cd0d610ecdc" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.155 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" iface="eth0" netns="/var/run/netns/cni-e4cd03cc-72d5-8f0c-38ab-0cd0d610ecdc" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.156 [INFO][4610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.156 [INFO][4610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.603 [INFO][4703] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.603 [INFO][4703] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.603 [INFO][4703] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.675 [WARNING][4703] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.675 [INFO][4703] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.682 [INFO][4703] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:11.708908 containerd[2017]: 2026-04-13 19:25:11.704 [INFO][4610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:11.718443 systemd[1]: run-netns-cni\x2de4cd03cc\x2d72d5\x2d8f0c\x2d38ab\x2d0cd0d610ecdc.mount: Deactivated successfully. Apr 13 19:25:11.721217 containerd[2017]: time="2026-04-13T19:25:11.720686218Z" level=info msg="TearDown network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\" successfully" Apr 13 19:25:11.721217 containerd[2017]: time="2026-04-13T19:25:11.720793090Z" level=info msg="StopPodSandbox for \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\" returns successfully" Apr 13 19:25:11.723483 containerd[2017]: time="2026-04-13T19:25:11.723113146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrfts,Uid:90202ccf-b846-4ffb-bfc4-994f0a0246ae,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.102 [INFO][4595] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.105 [INFO][4595] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" iface="eth0" netns="/var/run/netns/cni-73c57aa0-7041-963a-da32-0cf7ec830d0d" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.108 [INFO][4595] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" iface="eth0" netns="/var/run/netns/cni-73c57aa0-7041-963a-da32-0cf7ec830d0d" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.110 [INFO][4595] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" iface="eth0" netns="/var/run/netns/cni-73c57aa0-7041-963a-da32-0cf7ec830d0d" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.110 [INFO][4595] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.111 [INFO][4595] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.610 [INFO][4695] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.610 [INFO][4695] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.685 [INFO][4695] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.729 [WARNING][4695] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.729 [INFO][4695] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.733 [INFO][4695] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:11.749979 containerd[2017]: 2026-04-13 19:25:11.742 [INFO][4595] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:11.756107 containerd[2017]: time="2026-04-13T19:25:11.751600306Z" level=info msg="TearDown network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\" successfully" Apr 13 19:25:11.756107 containerd[2017]: time="2026-04-13T19:25:11.751661026Z" level=info msg="StopPodSandbox for \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\" returns successfully" Apr 13 19:25:11.756107 containerd[2017]: time="2026-04-13T19:25:11.755910190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbvwd,Uid:d9085047-db0c-44e3-8b9c-c8fdeea9cd63,Namespace:kube-system,Attempt:1,}" Apr 13 19:25:11.756611 systemd[1]: run-netns-cni\x2d73c57aa0\x2d7041\x2d963a\x2dda32\x2d0cf7ec830d0d.mount: Deactivated successfully. Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.139 [INFO][4638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.146 [INFO][4638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" iface="eth0" netns="/var/run/netns/cni-7002e284-1e37-da18-9f13-17a187868f1a" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.156 [INFO][4638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" iface="eth0" netns="/var/run/netns/cni-7002e284-1e37-da18-9f13-17a187868f1a" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.157 [INFO][4638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" iface="eth0" netns="/var/run/netns/cni-7002e284-1e37-da18-9f13-17a187868f1a" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.160 [INFO][4638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.160 [INFO][4638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.640 [INFO][4711] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.640 [INFO][4711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.735 [INFO][4711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.760 [WARNING][4711] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.760 [INFO][4711] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.766 [INFO][4711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:11.811094 containerd[2017]: 2026-04-13 19:25:11.793 [INFO][4638] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:11.820444 containerd[2017]: time="2026-04-13T19:25:11.820163531Z" level=info msg="TearDown network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\" successfully" Apr 13 19:25:11.820444 containerd[2017]: time="2026-04-13T19:25:11.820245707Z" level=info msg="StopPodSandbox for \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\" returns successfully" Apr 13 19:25:11.825539 containerd[2017]: time="2026-04-13T19:25:11.825479531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-b6rr7,Uid:ca478177-20bf-4954-9621-ef6793bbf95a,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.268 [INFO][4613] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.269 [INFO][4613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" iface="eth0" netns="/var/run/netns/cni-d85eb1f4-08bd-1ce5-94c7-82943c32a188" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.269 [INFO][4613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" iface="eth0" netns="/var/run/netns/cni-d85eb1f4-08bd-1ce5-94c7-82943c32a188" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.270 [INFO][4613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" iface="eth0" netns="/var/run/netns/cni-d85eb1f4-08bd-1ce5-94c7-82943c32a188" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.270 [INFO][4613] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.270 [INFO][4613] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.650 [INFO][4717] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.651 [INFO][4717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.765 [INFO][4717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.820 [WARNING][4717] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.820 [INFO][4717] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.823 [INFO][4717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:11.859380 containerd[2017]: 2026-04-13 19:25:11.838 [INFO][4613] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:11.862036 containerd[2017]: time="2026-04-13T19:25:11.861900875Z" level=info msg="TearDown network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\" successfully" Apr 13 19:25:11.862036 containerd[2017]: time="2026-04-13T19:25:11.861961043Z" level=info msg="StopPodSandbox for \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\" returns successfully" Apr 13 19:25:11.863084 containerd[2017]: time="2026-04-13T19:25:11.862971131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-fkz57,Uid:1adedd17-1d2f-4205-a531-c8bdcaf6fdc9,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.084 [INFO][4640] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.088 [INFO][4640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" iface="eth0" netns="/var/run/netns/cni-aec56a8f-5ca0-2402-8bd9-b36766808b72" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.092 [INFO][4640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" iface="eth0" netns="/var/run/netns/cni-aec56a8f-5ca0-2402-8bd9-b36766808b72" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.108 [INFO][4640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" iface="eth0" netns="/var/run/netns/cni-aec56a8f-5ca0-2402-8bd9-b36766808b72" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.108 [INFO][4640] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.108 [INFO][4640] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.661 [INFO][4694] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.661 [INFO][4694] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.828 [INFO][4694] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.854 [WARNING][4694] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.854 [INFO][4694] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.860 [INFO][4694] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:11.912041 containerd[2017]: 2026-04-13 19:25:11.889 [INFO][4640] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:11.914840 containerd[2017]: time="2026-04-13T19:25:11.914058719Z" level=info msg="TearDown network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\" successfully" Apr 13 19:25:11.914840 containerd[2017]: time="2026-04-13T19:25:11.914135807Z" level=info msg="StopPodSandbox for \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\" returns successfully" Apr 13 19:25:11.916531 containerd[2017]: time="2026-04-13T19:25:11.916298771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtvnr,Uid:31e132ac-0e5d-4c85-a51c-36b4b5148995,Namespace:kube-system,Attempt:1,}" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.077 [INFO][4569] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.078 [INFO][4569] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" iface="eth0" netns="/var/run/netns/cni-31e07f3b-9aca-77c4-f10c-db6c884f1478" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.079 [INFO][4569] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" iface="eth0" netns="/var/run/netns/cni-31e07f3b-9aca-77c4-f10c-db6c884f1478" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.086 [INFO][4569] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" iface="eth0" netns="/var/run/netns/cni-31e07f3b-9aca-77c4-f10c-db6c884f1478" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.086 [INFO][4569] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.086 [INFO][4569] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.690 [INFO][4689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.690 [INFO][4689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.860 [INFO][4689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.915 [WARNING][4689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.917 [INFO][4689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.923 [INFO][4689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:11.958706 containerd[2017]: 2026-04-13 19:25:11.939 [INFO][4569] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:11.962534 containerd[2017]: time="2026-04-13T19:25:11.962328287Z" level=info msg="TearDown network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\" successfully" Apr 13 19:25:11.967489 containerd[2017]: time="2026-04-13T19:25:11.966767399Z" level=info msg="StopPodSandbox for \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\" returns successfully" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.260 [INFO][4607] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.260 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" iface="eth0" netns="/var/run/netns/cni-0b113cb6-6d1e-f755-9a64-d838fe7ac940" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.269 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" iface="eth0" netns="/var/run/netns/cni-0b113cb6-6d1e-f755-9a64-d838fe7ac940" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.287 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" iface="eth0" netns="/var/run/netns/cni-0b113cb6-6d1e-f755-9a64-d838fe7ac940" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.287 [INFO][4607] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.287 [INFO][4607] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.689 [INFO][4723] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.691 [INFO][4723] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.927 [INFO][4723] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.963 [WARNING][4723] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.963 [INFO][4723] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:11.979 [INFO][4723] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:12.033876 containerd[2017]: 2026-04-13 19:25:12.007 [INFO][4607] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:12.037403 containerd[2017]: time="2026-04-13T19:25:12.034194128Z" level=info msg="TearDown network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\" successfully" Apr 13 19:25:12.037403 containerd[2017]: time="2026-04-13T19:25:12.034238636Z" level=info msg="StopPodSandbox for \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\" returns successfully" Apr 13 19:25:12.037403 containerd[2017]: time="2026-04-13T19:25:12.036753212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-qbzbd,Uid:519803bd-aa51-492a-ba0b-1cc7713863b8,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.327 [INFO][4662] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.328 [INFO][4662] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" iface="eth0" netns="/var/run/netns/cni-c22d0171-2bd6-d19b-b4c3-2e0cf961ef85" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.329 [INFO][4662] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" iface="eth0" netns="/var/run/netns/cni-c22d0171-2bd6-d19b-b4c3-2e0cf961ef85" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.336 [INFO][4662] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" iface="eth0" netns="/var/run/netns/cni-c22d0171-2bd6-d19b-b4c3-2e0cf961ef85" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.336 [INFO][4662] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.336 [INFO][4662] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.706 [INFO][4728] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.706 [INFO][4728] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:11.989 [INFO][4728] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:12.053 [WARNING][4728] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:12.053 [INFO][4728] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:12.060 [INFO][4728] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:12.074484 containerd[2017]: 2026-04-13 19:25:12.067 [INFO][4662] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:12.075813 containerd[2017]: time="2026-04-13T19:25:12.075573212Z" level=info msg="TearDown network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\" successfully" Apr 13 19:25:12.075813 containerd[2017]: time="2026-04-13T19:25:12.075646988Z" level=info msg="StopPodSandbox for \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\" returns successfully" Apr 13 19:25:12.078355 containerd[2017]: time="2026-04-13T19:25:12.078173528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfbd8bf44-pjxdt,Uid:856ccb99-242b-42ff-a8da-016e9416c1be,Namespace:calico-system,Attempt:1,}" Apr 13 19:25:12.114816 kubelet[3431]: I0413 19:25:12.114693 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-nginx-config\") pod \"2180400b-1490-4820-88c4-4c8f7911915a\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " Apr 13 19:25:12.114816 kubelet[3431]: I0413 19:25:12.114792 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2180400b-1490-4820-88c4-4c8f7911915a-whisker-backend-key-pair\") pod \"2180400b-1490-4820-88c4-4c8f7911915a\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " Apr 13 19:25:12.116096 kubelet[3431]: I0413 19:25:12.114837 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmk9b\" (UniqueName: \"kubernetes.io/projected/2180400b-1490-4820-88c4-4c8f7911915a-kube-api-access-lmk9b\") pod \"2180400b-1490-4820-88c4-4c8f7911915a\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " Apr 13 19:25:12.116096 kubelet[3431]: I0413 19:25:12.114899 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-whisker-ca-bundle\") pod \"2180400b-1490-4820-88c4-4c8f7911915a\" (UID: \"2180400b-1490-4820-88c4-4c8f7911915a\") " Apr 13 19:25:12.116096 kubelet[3431]: I0413 19:25:12.115659 3431 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2180400b-1490-4820-88c4-4c8f7911915a" (UID: "2180400b-1490-4820-88c4-4c8f7911915a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:25:12.119055 kubelet[3431]: I0413 19:25:12.116745 3431 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "2180400b-1490-4820-88c4-4c8f7911915a" (UID: "2180400b-1490-4820-88c4-4c8f7911915a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:25:12.131949 kubelet[3431]: I0413 19:25:12.131827 3431 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2180400b-1490-4820-88c4-4c8f7911915a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2180400b-1490-4820-88c4-4c8f7911915a" (UID: "2180400b-1490-4820-88c4-4c8f7911915a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:25:12.132502 kubelet[3431]: I0413 19:25:12.132416 3431 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2180400b-1490-4820-88c4-4c8f7911915a-kube-api-access-lmk9b" (OuterVolumeSpecName: "kube-api-access-lmk9b") pod "2180400b-1490-4820-88c4-4c8f7911915a" (UID: "2180400b-1490-4820-88c4-4c8f7911915a"). InnerVolumeSpecName "kube-api-access-lmk9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:25:12.218260 kubelet[3431]: I0413 19:25:12.218077 3431 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2180400b-1490-4820-88c4-4c8f7911915a-whisker-backend-key-pair\") on node \"ip-172-31-27-52\" DevicePath \"\"" Apr 13 19:25:12.220178 kubelet[3431]: I0413 19:25:12.220135 3431 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmk9b\" (UniqueName: \"kubernetes.io/projected/2180400b-1490-4820-88c4-4c8f7911915a-kube-api-access-lmk9b\") on node \"ip-172-31-27-52\" DevicePath \"\"" Apr 13 19:25:12.220527 kubelet[3431]: I0413 19:25:12.220494 3431 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-whisker-ca-bundle\") on node \"ip-172-31-27-52\" DevicePath \"\"" Apr 13 19:25:12.221626 kubelet[3431]: I0413 19:25:12.220716 3431 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2180400b-1490-4820-88c4-4c8f7911915a-nginx-config\") on node \"ip-172-31-27-52\" DevicePath \"\"" Apr 13 19:25:12.636648 systemd[1]: Removed slice kubepods-besteffort-pod2180400b_1490_4820_88c4_4c8f7911915a.slice - libcontainer container kubepods-besteffort-pod2180400b_1490_4820_88c4_4c8f7911915a.slice. Apr 13 19:25:12.746631 systemd[1]: run-netns-cni\x2dc22d0171\x2d2bd6\x2dd19b\x2db4c3\x2d2e0cf961ef85.mount: Deactivated successfully. Apr 13 19:25:12.746820 systemd[1]: run-netns-cni\x2d7002e284\x2d1e37\x2dda18\x2d9f13\x2d17a187868f1a.mount: Deactivated successfully. Apr 13 19:25:12.746951 systemd[1]: run-netns-cni\x2daec56a8f\x2d5ca0\x2d2402\x2d8bd9\x2db36766808b72.mount: Deactivated successfully. Apr 13 19:25:12.747079 systemd[1]: run-netns-cni\x2dd85eb1f4\x2d08bd\x2d1ce5\x2d94c7\x2d82943c32a188.mount: Deactivated successfully. Apr 13 19:25:12.747197 systemd[1]: run-netns-cni\x2d31e07f3b\x2d9aca\x2d77c4\x2df10c\x2ddb6c884f1478.mount: Deactivated successfully. 
Apr 13 19:25:12.747320 systemd[1]: run-netns-cni\x2d0b113cb6\x2d6d1e\x2df755\x2d9a64\x2dd838fe7ac940.mount: Deactivated successfully. Apr 13 19:25:12.748369 systemd[1]: var-lib-kubelet-pods-2180400b\x2d1490\x2d4820\x2d88c4\x2d4c8f7911915a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlmk9b.mount: Deactivated successfully. Apr 13 19:25:12.748577 systemd[1]: var-lib-kubelet-pods-2180400b\x2d1490\x2d4820\x2d88c4\x2d4c8f7911915a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 19:25:12.811718 systemd-networkd[1936]: cali3da603a8dea: Link UP Apr 13 19:25:12.816253 systemd-networkd[1936]: cali3da603a8dea: Gained carrier Apr 13 19:25:12.826936 (udev-worker)[4936]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.056 [ERROR][4770] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.153 [INFO][4770] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0 csi-node-driver- calico-system 90202ccf-b846-4ffb-bfc4-994f0a0246ae 940 0 2026-04-13 19:24:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-52 csi-node-driver-hrfts eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3da603a8dea [] [] }} ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.154 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.466 [INFO][4864] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" HandleID="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.523 [INFO][4864] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" HandleID="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003192a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-52", "pod":"csi-node-driver-hrfts", "timestamp":"2026-04-13 19:25:12.46693483 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002ec000)} Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.524 [INFO][4864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.524 [INFO][4864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.524 [INFO][4864] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.535 [INFO][4864] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.554 [INFO][4864] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.574 [INFO][4864] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.582 [INFO][4864] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.593 [INFO][4864] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.593 [INFO][4864] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.610 [INFO][4864] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.644 [INFO][4864] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.700 [INFO][4864] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.1/26] block=192.168.5.0/26 handle="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.710 [INFO][4864] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.1/26] handle="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" host="ip-172-31-27-52" Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.710 [INFO][4864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:12.918311 containerd[2017]: 2026-04-13 19:25:12.710 [INFO][4864] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.1/26] IPv6=[] ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" HandleID="k8s-pod-network.fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:13.013315 containerd[2017]: 2026-04-13 19:25:12.763 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"90202ccf-b846-4ffb-bfc4-994f0a0246ae", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"csi-node-driver-hrfts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3da603a8dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.013315 containerd[2017]: 2026-04-13 19:25:12.768 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.1/32] ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:13.013315 containerd[2017]: 2026-04-13 19:25:12.768 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3da603a8dea ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:13.013315 containerd[2017]: 2026-04-13 19:25:12.825 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:13.013315 containerd[2017]: 2026-04-13 19:25:12.835 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" 
Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"90202ccf-b846-4ffb-bfc4-994f0a0246ae", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e", Pod:"csi-node-driver-hrfts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3da603a8dea", MAC:"2e:e5:ce:ff:ee:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.013315 containerd[2017]: 2026-04-13 19:25:12.903 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e" Namespace="calico-system" Pod="csi-node-driver-hrfts" WorkloadEndpoint="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:13.007428 systemd[1]: Created slice kubepods-besteffort-pod03738109_4229_4c74_be2d_298cc7e356bf.slice - libcontainer container kubepods-besteffort-pod03738109_4229_4c74_be2d_298cc7e356bf.slice. 
Apr 13 19:25:13.013959 kubelet[3431]: I0413 19:25:12.929243 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03738109-4229-4c74-be2d-298cc7e356bf-nginx-config\") pod \"whisker-775444d9b7-dqwmj\" (UID: \"03738109-4229-4c74-be2d-298cc7e356bf\") " pod="calico-system/whisker-775444d9b7-dqwmj" Apr 13 19:25:13.013959 kubelet[3431]: I0413 19:25:12.929321 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03738109-4229-4c74-be2d-298cc7e356bf-whisker-backend-key-pair\") pod \"whisker-775444d9b7-dqwmj\" (UID: \"03738109-4229-4c74-be2d-298cc7e356bf\") " pod="calico-system/whisker-775444d9b7-dqwmj" Apr 13 19:25:13.013959 kubelet[3431]: I0413 19:25:12.929388 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03738109-4229-4c74-be2d-298cc7e356bf-whisker-ca-bundle\") pod \"whisker-775444d9b7-dqwmj\" (UID: \"03738109-4229-4c74-be2d-298cc7e356bf\") " pod="calico-system/whisker-775444d9b7-dqwmj" Apr 13 19:25:13.013959 kubelet[3431]: I0413 19:25:12.931797 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76hqq\" (UniqueName: \"kubernetes.io/projected/03738109-4229-4c74-be2d-298cc7e356bf-kube-api-access-76hqq\") pod \"whisker-775444d9b7-dqwmj\" (UID: \"03738109-4229-4c74-be2d-298cc7e356bf\") " pod="calico-system/whisker-775444d9b7-dqwmj" Apr 13 19:25:13.081114 (udev-worker)[4935]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:13.085626 systemd-networkd[1936]: calid360f7e92f6: Link UP Apr 13 19:25:13.089828 systemd-networkd[1936]: calid360f7e92f6: Gained carrier Apr 13 19:25:13.167319 kubelet[3431]: I0413 19:25:13.167245 3431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2180400b-1490-4820-88c4-4c8f7911915a" path="/var/lib/kubelet/pods/2180400b-1490-4820-88c4-4c8f7911915a/volumes" Apr 13 19:25:13.169992 systemd-networkd[1936]: califd029b8658f: Link UP Apr 13 19:25:13.180346 systemd-networkd[1936]: califd029b8658f: Gained carrier Apr 13 19:25:13.196579 containerd[2017]: time="2026-04-13T19:25:13.195073881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:13.203502 containerd[2017]: time="2026-04-13T19:25:13.202843917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:13.203502 containerd[2017]: time="2026-04-13T19:25:13.203189637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.206527 containerd[2017]: time="2026-04-13T19:25:13.205296837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.147 [ERROR][4804] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.276 [INFO][4804] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0 goldmane-5b85766d88- calico-system ca478177-20bf-4954-9621-ef6793bbf95a 939 0 2026-04-13 19:24:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-27-52 goldmane-5b85766d88-b6rr7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid360f7e92f6 [] [] }} ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.279 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.595 [INFO][4875] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" HandleID="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.686 [INFO][4875] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" HandleID="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-52", "pod":"goldmane-5b85766d88-b6rr7", "timestamp":"2026-04-13 19:25:12.595216426 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001e66e0)} Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.686 [INFO][4875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.710 [INFO][4875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.710 [INFO][4875] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.728 [INFO][4875] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.784 [INFO][4875] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.912 [INFO][4875] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.950 [INFO][4875] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.963 [INFO][4875] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.963 [INFO][4875] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.968 [INFO][4875] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.980 [INFO][4875] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.989 [INFO][4875] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.2/26] block=192.168.5.0/26 handle="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.989 [INFO][4875] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.2/26] handle="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" host="ip-172-31-27-52" Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.989 [INFO][4875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
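Each CNI ADD in this stretch of the log (handlers [4864], [4875], [4880], [4887]) prints the same acquire/assign/release sequence around the host-wide IPAM lock, which is why the concurrent requests end up with distinct, consecutive addresses from the shared block. A stripped-down sketch of that serialization, assumed for illustration only and not Calico's implementation:

```go
// Sketch only (not Calico's code): concurrent allocations from one shared
// block are serialized behind a per-host lock, so each request gets the
// next free address, never a duplicate.
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock in the log
	next int        // next free offset inside the affine /26 block
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.next++
	return fmt.Sprintf("%-34s -> 192.168.5.%d/32", pod, h.next)
}

func main() {
	ipam := &hostIPAM{}
	pods := []string{
		"csi-node-driver-hrfts",
		"goldmane-5b85766d88-b6rr7",
		"coredns-674b8bbfcf-nbvwd",
		"calico-apiserver-648588977d-fkz57",
	}

	var wg sync.WaitGroup
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(ipam.assign(pod))
		}(pod)
	}
	wg.Wait()
}
```

Which request draws which offset depends only on who reaches the lock first, matching the interleaving of the [4864]/[4875]/[4880]/[4887] handlers seen above and below.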
Apr 13 19:25:13.214115 containerd[2017]: 2026-04-13 19:25:12.989 [INFO][4875] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.2/26] IPv6=[] ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" HandleID="k8s-pod-network.c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.215252 containerd[2017]: 2026-04-13 19:25:12.996 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ca478177-20bf-4954-9621-ef6793bbf95a", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"goldmane-5b85766d88-b6rr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid360f7e92f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.215252 containerd[2017]: 2026-04-13 19:25:12.996 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.2/32] ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.215252 containerd[2017]: 2026-04-13 19:25:12.996 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid360f7e92f6 ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.215252 containerd[2017]: 2026-04-13 19:25:13.106 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.215252 containerd[2017]: 2026-04-13 19:25:13.112 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" 
WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ca478177-20bf-4954-9621-ef6793bbf95a", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f", Pod:"goldmane-5b85766d88-b6rr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid360f7e92f6", MAC:"8e:e6:62:b1:8c:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.215252 containerd[2017]: 2026-04-13 19:25:13.158 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f" Namespace="calico-system" Pod="goldmane-5b85766d88-b6rr7" WorkloadEndpoint="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.191 [ERROR][4786] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.335 [INFO][4786] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0 coredns-674b8bbfcf- kube-system d9085047-db0c-44e3-8b9c-c8fdeea9cd63 938 0 2026-04-13 19:24:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-52 coredns-674b8bbfcf-nbvwd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd029b8658f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.335 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.592 [INFO][4880] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" HandleID="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.690 [INFO][4880] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" HandleID="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bd400), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-52", "pod":"coredns-674b8bbfcf-nbvwd", "timestamp":"2026-04-13 19:25:12.592040902 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400002c000)} Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.691 [INFO][4880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.991 [INFO][4880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.991 [INFO][4880] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:12.999 [INFO][4880] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.017 [INFO][4880] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.037 [INFO][4880] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.043 [INFO][4880] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.065 [INFO][4880] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.065 [INFO][4880] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.073 [INFO][4880] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14 Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.102 [INFO][4880] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.137 [INFO][4880] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.3/26] block=192.168.5.0/26 handle="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 
2026-04-13 19:25:13.137 [INFO][4880] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.3/26] handle="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" host="ip-172-31-27-52" Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.137 [INFO][4880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:13.287439 containerd[2017]: 2026-04-13 19:25:13.137 [INFO][4880] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.3/26] IPv6=[] ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" HandleID="k8s-pod-network.0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.291188 containerd[2017]: 2026-04-13 19:25:13.146 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9085047-db0c-44e3-8b9c-c8fdeea9cd63", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"coredns-674b8bbfcf-nbvwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd029b8658f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.291188 containerd[2017]: 2026-04-13 19:25:13.146 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.3/32] ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.291188 containerd[2017]: 2026-04-13 19:25:13.146 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd029b8658f ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" 
WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.291188 containerd[2017]: 2026-04-13 19:25:13.191 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.291188 containerd[2017]: 2026-04-13 19:25:13.202 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9085047-db0c-44e3-8b9c-c8fdeea9cd63", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14", Pod:"coredns-674b8bbfcf-nbvwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd029b8658f", MAC:"16:13:bf:a4:af:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.291188 containerd[2017]: 2026-04-13 19:25:13.265 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbvwd" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:13.323952 containerd[2017]: time="2026-04-13T19:25:13.323275078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775444d9b7-dqwmj,Uid:03738109-4229-4c74-be2d-298cc7e356bf,Namespace:calico-system,Attempt:0,}" Apr 13 19:25:13.364156 systemd[1]: Started cri-containerd-fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e.scope - libcontainer container fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e. 
Apr 13 19:25:13.379440 systemd-networkd[1936]: calibd66a6f37d3: Link UP Apr 13 19:25:13.379960 systemd-networkd[1936]: calibd66a6f37d3: Gained carrier Apr 13 19:25:13.474590 containerd[2017]: time="2026-04-13T19:25:13.464726891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:13.474590 containerd[2017]: time="2026-04-13T19:25:13.464847515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:13.474590 containerd[2017]: time="2026-04-13T19:25:13.464891579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.474590 containerd[2017]: time="2026-04-13T19:25:13.465063635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:12.247 [ERROR][4815] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:12.344 [INFO][4815] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0 calico-apiserver-648588977d- calico-system 1adedd17-1d2f-4205-a531-c8bdcaf6fdc9 943 0 2026-04-13 19:24:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:648588977d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-52 calico-apiserver-648588977d-fkz57 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibd66a6f37d3 [] [] }} ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:12.345 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:12.634 [INFO][4887] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" HandleID="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:12.692 [INFO][4887] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" HandleID="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120320), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-52", 
"pod":"calico-apiserver-648588977d-fkz57", "timestamp":"2026-04-13 19:25:12.634796207 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400048d340)} Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:12.693 [INFO][4887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.143 [INFO][4887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.143 [INFO][4887] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.153 [INFO][4887] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.187 [INFO][4887] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.219 [INFO][4887] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.240 [INFO][4887] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.246 [INFO][4887] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.246 [INFO][4887] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.250 [INFO][4887] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36 Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.275 [INFO][4887] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.296 [INFO][4887] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.4/26] block=192.168.5.0/26 handle="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.297 [INFO][4887] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.4/26] handle="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" host="ip-172-31-27-52" Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.297 [INFO][4887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:13.474590 containerd[2017]: 2026-04-13 19:25:13.297 [INFO][4887] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.4/26] IPv6=[] ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" HandleID="k8s-pod-network.9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.481025 containerd[2017]: 2026-04-13 19:25:13.338 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"1adedd17-1d2f-4205-a531-c8bdcaf6fdc9", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"calico-apiserver-648588977d-fkz57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd66a6f37d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.481025 containerd[2017]: 2026-04-13 19:25:13.338 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.4/32] ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.481025 containerd[2017]: 2026-04-13 19:25:13.338 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd66a6f37d3 ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.481025 containerd[2017]: 2026-04-13 19:25:13.388 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.481025 containerd[2017]: 2026-04-13 19:25:13.403 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"1adedd17-1d2f-4205-a531-c8bdcaf6fdc9", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36", Pod:"calico-apiserver-648588977d-fkz57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd66a6f37d3", MAC:"d2:dd:0f:43:67:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.481025 containerd[2017]: 2026-04-13 19:25:13.451 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36" Namespace="calico-system" Pod="calico-apiserver-648588977d-fkz57" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:13.540631 systemd-networkd[1936]: cali73f0c20ed9f: Link UP Apr 13 19:25:13.551290 systemd-networkd[1936]: cali73f0c20ed9f: Gained carrier Apr 13 19:25:13.568555 containerd[2017]: time="2026-04-13T19:25:13.567122819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:13.568555 containerd[2017]: time="2026-04-13T19:25:13.567248879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:13.568555 containerd[2017]: time="2026-04-13T19:25:13.567290219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.568555 containerd[2017]: time="2026-04-13T19:25:13.567470687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:12.288 [ERROR][4828] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:12.347 [INFO][4828] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0 coredns-674b8bbfcf- kube-system 31e132ac-0e5d-4c85-a51c-36b4b5148995 937 0 2026-04-13 19:24:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-52 coredns-674b8bbfcf-gtvnr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73f0c20ed9f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:12.347 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:12.695 [INFO][4884] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" HandleID="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:12.829 [INFO][4884] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" HandleID="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-52", "pod":"coredns-674b8bbfcf-gtvnr", "timestamp":"2026-04-13 19:25:12.695011475 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c8f20)} Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:12.829 [INFO][4884] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.297 [INFO][4884] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.297 [INFO][4884] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.308 [INFO][4884] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.336 [INFO][4884] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.389 [INFO][4884] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.405 [INFO][4884] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.420 [INFO][4884] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.421 [INFO][4884] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.433 [INFO][4884] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210 Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.452 [INFO][4884] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.476 [INFO][4884] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.5/26] block=192.168.5.0/26 handle="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.476 [INFO][4884] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.5/26] handle="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" host="ip-172-31-27-52" Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.476 [INFO][4884] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:13.655876 containerd[2017]: 2026-04-13 19:25:13.476 [INFO][4884] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.5/26] IPv6=[] ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" HandleID="k8s-pod-network.f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.658922 containerd[2017]: 2026-04-13 19:25:13.500 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"31e132ac-0e5d-4c85-a51c-36b4b5148995", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"coredns-674b8bbfcf-gtvnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f0c20ed9f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.658922 containerd[2017]: 2026-04-13 19:25:13.500 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.5/32] ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.658922 containerd[2017]: 2026-04-13 19:25:13.500 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73f0c20ed9f ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.658922 containerd[2017]: 2026-04-13 19:25:13.555 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" 
WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.658922 containerd[2017]: 2026-04-13 19:25:13.570 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"31e132ac-0e5d-4c85-a51c-36b4b5148995", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210", Pod:"coredns-674b8bbfcf-gtvnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f0c20ed9f", MAC:"b2:5a:cc:b6:ac:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.658922 containerd[2017]: 2026-04-13 19:25:13.634 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtvnr" WorkloadEndpoint="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:13.750223 systemd[1]: Started cri-containerd-c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f.scope - libcontainer container c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f. Apr 13 19:25:13.780780 systemd-networkd[1936]: cali7c06c7a2186: Link UP Apr 13 19:25:13.792298 systemd[1]: run-containerd-runc-k8s.io-0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14-runc.5JjnCV.mount: Deactivated successfully. Apr 13 19:25:13.797974 systemd-networkd[1936]: cali7c06c7a2186: Gained carrier Apr 13 19:25:13.818854 containerd[2017]: time="2026-04-13T19:25:13.818659944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:13.820272 containerd[2017]: time="2026-04-13T19:25:13.818812932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:13.820272 containerd[2017]: time="2026-04-13T19:25:13.818852856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.820272 containerd[2017]: time="2026-04-13T19:25:13.819027324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:13.832851 systemd[1]: Started cri-containerd-0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14.scope - libcontainer container 0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14. Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:12.392 [ERROR][4842] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:12.466 [INFO][4842] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0 calico-kube-controllers-6dfbd8bf44- calico-system 856ccb99-242b-42ff-a8da-016e9416c1be 945 0 2026-04-13 19:24:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dfbd8bf44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-52 calico-kube-controllers-6dfbd8bf44-pjxdt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c06c7a2186 [] [] }} ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:12.468 [INFO][4842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:12.798 [INFO][4901] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" HandleID="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:12.921 [INFO][4901] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" HandleID="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e9810), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-52", 
"pod":"calico-kube-controllers-6dfbd8bf44-pjxdt", "timestamp":"2026-04-13 19:25:12.798177971 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002dc000)} Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:12.923 [INFO][4901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.479 [INFO][4901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.481 [INFO][4901] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.492 [INFO][4901] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.517 [INFO][4901] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.561 [INFO][4901] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.572 [INFO][4901] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.582 [INFO][4901] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.586 [INFO][4901] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.593 [INFO][4901] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.621 [INFO][4901] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.655 [INFO][4901] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.6/26] block=192.168.5.0/26 handle="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.658 [INFO][4901] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.6/26] handle="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" host="ip-172-31-27-52" Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.659 [INFO][4901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:13.926353 containerd[2017]: 2026-04-13 19:25:13.659 [INFO][4901] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.6/26] IPv6=[] ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" HandleID="k8s-pod-network.49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.929954 containerd[2017]: 2026-04-13 19:25:13.696 [INFO][4842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0", GenerateName:"calico-kube-controllers-6dfbd8bf44-", Namespace:"calico-system", SelfLink:"", UID:"856ccb99-242b-42ff-a8da-016e9416c1be", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfbd8bf44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"calico-kube-controllers-6dfbd8bf44-pjxdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c06c7a2186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.929954 containerd[2017]: 2026-04-13 19:25:13.696 [INFO][4842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.6/32] ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.929954 containerd[2017]: 2026-04-13 19:25:13.696 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c06c7a2186 ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.929954 containerd[2017]: 2026-04-13 19:25:13.832 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.929954 containerd[2017]: 2026-04-13 
19:25:13.840 [INFO][4842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0", GenerateName:"calico-kube-controllers-6dfbd8bf44-", Namespace:"calico-system", SelfLink:"", UID:"856ccb99-242b-42ff-a8da-016e9416c1be", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfbd8bf44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e", Pod:"calico-kube-controllers-6dfbd8bf44-pjxdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c06c7a2186", MAC:"e6:6c:65:91:87:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:13.929954 containerd[2017]: 2026-04-13 19:25:13.912 [INFO][4842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e" Namespace="calico-system" Pod="calico-kube-controllers-6dfbd8bf44-pjxdt" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:13.937025 containerd[2017]: time="2026-04-13T19:25:13.936957541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrfts,Uid:90202ccf-b846-4ffb-bfc4-994f0a0246ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e\"" Apr 13 19:25:13.944855 containerd[2017]: time="2026-04-13T19:25:13.944775649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 19:25:14.030855 systemd-networkd[1936]: cali32d23f5615d: Link UP Apr 13 19:25:14.061299 systemd[1]: run-containerd-runc-k8s.io-9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36-runc.BCTukk.mount: Deactivated successfully. Apr 13 19:25:14.066364 systemd-networkd[1936]: cali32d23f5615d: Gained carrier Apr 13 19:25:14.088494 containerd[2017]: time="2026-04-13T19:25:14.087508594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:14.088494 containerd[2017]: time="2026-04-13T19:25:14.087600058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:14.089034 systemd[1]: Started cri-containerd-9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36.scope - libcontainer container 9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36. Apr 13 19:25:14.095130 containerd[2017]: time="2026-04-13T19:25:14.092000158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:14.115208 containerd[2017]: time="2026-04-13T19:25:14.099549250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:12.415 [ERROR][4840] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:12.485 [INFO][4840] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0 calico-apiserver-648588977d- calico-system 519803bd-aa51-492a-ba0b-1cc7713863b8 942 0 2026-04-13 19:24:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:648588977d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-52 calico-apiserver-648588977d-qbzbd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali32d23f5615d [] [] }} ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:12.486 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:12.934 [INFO][4906] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" HandleID="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:12.965 [INFO][4906] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" HandleID="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120b50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-52", "pod":"calico-apiserver-648588977d-qbzbd", "timestamp":"2026-04-13 19:25:12.934118016 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400028e420)} Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:12.966 [INFO][4906] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.659 [INFO][4906] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.661 [INFO][4906] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.669 [INFO][4906] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.691 [INFO][4906] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.800 [INFO][4906] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.827 [INFO][4906] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.849 [INFO][4906] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.849 [INFO][4906] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.865 [INFO][4906] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.892 [INFO][4906] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.949 [INFO][4906] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.7/26] block=192.168.5.0/26 handle="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.949 [INFO][4906] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.7/26] handle="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" host="ip-172-31-27-52" Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.950 [INFO][4906] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 19:25:14.169868 containerd[2017]: 2026-04-13 19:25:13.950 [INFO][4906] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.7/26] IPv6=[] ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" HandleID="k8s-pod-network.0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.171290 containerd[2017]: 2026-04-13 19:25:13.982 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"519803bd-aa51-492a-ba0b-1cc7713863b8", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"calico-apiserver-648588977d-qbzbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32d23f5615d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.171290 containerd[2017]: 2026-04-13 19:25:13.982 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.7/32] ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.171290 containerd[2017]: 2026-04-13 19:25:13.982 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32d23f5615d ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.171290 containerd[2017]: 2026-04-13 19:25:14.087 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.171290 containerd[2017]: 2026-04-13 19:25:14.101 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"519803bd-aa51-492a-ba0b-1cc7713863b8", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae", Pod:"calico-apiserver-648588977d-qbzbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32d23f5615d", MAC:"52:be:97:2d:08:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.171290 containerd[2017]: 2026-04-13 19:25:14.149 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae" Namespace="calico-system" Pod="calico-apiserver-648588977d-qbzbd" WorkloadEndpoint="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:14.257709 containerd[2017]: time="2026-04-13T19:25:14.257134751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbvwd,Uid:d9085047-db0c-44e3-8b9c-c8fdeea9cd63,Namespace:kube-system,Attempt:1,} returns sandbox id \"0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14\"" Apr 13 19:25:14.284587 containerd[2017]: time="2026-04-13T19:25:14.284264891Z" level=info msg="CreateContainer within sandbox \"0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:14.324346 systemd[1]: Started cri-containerd-f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210.scope - libcontainer container f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210. Apr 13 19:25:14.350088 containerd[2017]: time="2026-04-13T19:25:14.347547443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:14.351789 containerd[2017]: time="2026-04-13T19:25:14.350641991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:14.355869 containerd[2017]: time="2026-04-13T19:25:14.352075883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:14.355869 containerd[2017]: time="2026-04-13T19:25:14.352579871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:14.359125 containerd[2017]: time="2026-04-13T19:25:14.359075363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-b6rr7,Uid:ca478177-20bf-4954-9621-ef6793bbf95a,Namespace:calico-system,Attempt:1,} returns sandbox id \"c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f\"" Apr 13 19:25:14.396879 systemd-networkd[1936]: califd029b8658f: Gained IPv6LL Apr 13 19:25:14.404788 containerd[2017]: time="2026-04-13T19:25:14.404711123Z" level=info msg="CreateContainer within sandbox \"0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0700661ba09c9bb4099b570072bfd3385a4a86c3751d45fe1ef8a6883b0370a\"" Apr 13 19:25:14.411498 containerd[2017]: time="2026-04-13T19:25:14.410942159Z" level=info msg="StartContainer for \"d0700661ba09c9bb4099b570072bfd3385a4a86c3751d45fe1ef8a6883b0370a\"" Apr 13 19:25:14.475720 systemd[1]: Started cri-containerd-49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e.scope - libcontainer container 49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e. Apr 13 19:25:14.512611 containerd[2017]: time="2026-04-13T19:25:14.510999804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:14.512611 containerd[2017]: time="2026-04-13T19:25:14.511113684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:14.512611 containerd[2017]: time="2026-04-13T19:25:14.511150584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:14.519366 containerd[2017]: time="2026-04-13T19:25:14.511322328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:14.589472 containerd[2017]: time="2026-04-13T19:25:14.588374928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtvnr,Uid:31e132ac-0e5d-4c85-a51c-36b4b5148995,Namespace:kube-system,Attempt:1,} returns sandbox id \"f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210\"" Apr 13 19:25:14.624822 systemd[1]: Started cri-containerd-0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae.scope - libcontainer container 0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae. 
Apr 13 19:25:14.628203 containerd[2017]: time="2026-04-13T19:25:14.627786228Z" level=info msg="CreateContainer within sandbox \"f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:14.728928 containerd[2017]: time="2026-04-13T19:25:14.728848429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-fkz57,Uid:1adedd17-1d2f-4205-a531-c8bdcaf6fdc9,Namespace:calico-system,Attempt:1,} returns sandbox id \"9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36\"" Apr 13 19:25:14.779816 systemd-networkd[1936]: calie7e07966b60: Link UP Apr 13 19:25:14.784339 systemd-networkd[1936]: calie7e07966b60: Gained carrier Apr 13 19:25:14.807171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210978729.mount: Deactivated successfully. Apr 13 19:25:14.839138 systemd[1]: Started cri-containerd-d0700661ba09c9bb4099b570072bfd3385a4a86c3751d45fe1ef8a6883b0370a.scope - libcontainer container d0700661ba09c9bb4099b570072bfd3385a4a86c3751d45fe1ef8a6883b0370a. Apr 13 19:25:14.847039 systemd-networkd[1936]: cali7c06c7a2186: Gained IPv6LL Apr 13 19:25:14.849277 systemd-networkd[1936]: cali3da603a8dea: Gained IPv6LL Apr 13 19:25:14.860751 containerd[2017]: time="2026-04-13T19:25:14.860682302Z" level=info msg="CreateContainer within sandbox \"f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73637a5d04959969927f766b50d184ba8c9dd23874c961fff78e6018d2bb4863\"" Apr 13 19:25:14.863481 containerd[2017]: time="2026-04-13T19:25:14.862051334Z" level=info msg="StartContainer for \"73637a5d04959969927f766b50d184ba8c9dd23874c961fff78e6018d2bb4863\"" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:13.850 [ERROR][5050] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:13.959 [INFO][5050] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0 whisker-775444d9b7- calico-system 03738109-4229-4c74-be2d-298cc7e356bf 970 0 2026-04-13 19:25:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775444d9b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-27-52 whisker-775444d9b7-dqwmj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie7e07966b60 [] [] }} ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:13.964 [INFO][5050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.436 [INFO][5223] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" 
HandleID="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Workload="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.511 [INFO][5223] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" HandleID="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Workload="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019d9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-52", "pod":"whisker-775444d9b7-dqwmj", "timestamp":"2026-04-13 19:25:14.436884324 +0000 UTC"}, Hostname:"ip-172-31-27-52", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003df4a0)} Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.512 [INFO][5223] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.512 [INFO][5223] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.512 [INFO][5223] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-52' Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.527 [INFO][5223] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.547 [INFO][5223] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.574 [INFO][5223] ipam/ipam.go 526: Trying affinity for 192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.583 [INFO][5223] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.610 [INFO][5223] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.610 [INFO][5223] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.623 [INFO][5223] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862 Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.648 [INFO][5223] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.678 [INFO][5223] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.8/26] block=192.168.5.0/26 handle="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.678 [INFO][5223] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.8/26] 
handle="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" host="ip-172-31-27-52" Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.678 [INFO][5223] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:14.888236 containerd[2017]: 2026-04-13 19:25:14.678 [INFO][5223] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.8/26] IPv6=[] ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" HandleID="k8s-pod-network.e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Workload="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.892944 containerd[2017]: 2026-04-13 19:25:14.710 [INFO][5050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0", GenerateName:"whisker-775444d9b7-", Namespace:"calico-system", SelfLink:"", UID:"03738109-4229-4c74-be2d-298cc7e356bf", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775444d9b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"", Pod:"whisker-775444d9b7-dqwmj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie7e07966b60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.892944 containerd[2017]: 2026-04-13 19:25:14.714 [INFO][5050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.8/32] ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.892944 containerd[2017]: 2026-04-13 19:25:14.716 [INFO][5050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7e07966b60 ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.892944 containerd[2017]: 2026-04-13 19:25:14.794 [INFO][5050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.892944 containerd[2017]: 2026-04-13 19:25:14.813 [INFO][5050] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0", GenerateName:"whisker-775444d9b7-", Namespace:"calico-system", SelfLink:"", UID:"03738109-4229-4c74-be2d-298cc7e356bf", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775444d9b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862", Pod:"whisker-775444d9b7-dqwmj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie7e07966b60", MAC:"86:de:4b:b7:9c:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:14.892944 containerd[2017]: 2026-04-13 19:25:14.853 [INFO][5050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862" Namespace="calico-system" Pod="whisker-775444d9b7-dqwmj" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--775444d9b7--dqwmj-eth0" Apr 13 19:25:14.973876 systemd-networkd[1936]: calid360f7e92f6: Gained IPv6LL Apr 13 19:25:15.038730 systemd-networkd[1936]: cali73f0c20ed9f: Gained IPv6LL Apr 13 19:25:15.038808 systemd[1]: Started cri-containerd-73637a5d04959969927f766b50d184ba8c9dd23874c961fff78e6018d2bb4863.scope - libcontainer container 73637a5d04959969927f766b50d184ba8c9dd23874c961fff78e6018d2bb4863. Apr 13 19:25:15.084993 containerd[2017]: time="2026-04-13T19:25:15.074718683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:15.084993 containerd[2017]: time="2026-04-13T19:25:15.077059379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:15.084993 containerd[2017]: time="2026-04-13T19:25:15.077203883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.084993 containerd[2017]: time="2026-04-13T19:25:15.077894135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:15.093432 containerd[2017]: time="2026-04-13T19:25:15.092817695Z" level=info msg="StartContainer for \"d0700661ba09c9bb4099b570072bfd3385a4a86c3751d45fe1ef8a6883b0370a\" returns successfully" Apr 13 19:25:15.102826 systemd-networkd[1936]: calibd66a6f37d3: Gained IPv6LL Apr 13 19:25:15.142965 containerd[2017]: time="2026-04-13T19:25:15.142623239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dfbd8bf44-pjxdt,Uid:856ccb99-242b-42ff-a8da-016e9416c1be,Namespace:calico-system,Attempt:1,} returns sandbox id \"49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e\"" Apr 13 19:25:15.196823 containerd[2017]: time="2026-04-13T19:25:15.196516655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-648588977d-qbzbd,Uid:519803bd-aa51-492a-ba0b-1cc7713863b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae\"" Apr 13 19:25:15.212824 systemd[1]: Started cri-containerd-e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862.scope - libcontainer container e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862. Apr 13 19:25:15.260375 containerd[2017]: time="2026-04-13T19:25:15.260056824Z" level=info msg="StartContainer for \"73637a5d04959969927f766b50d184ba8c9dd23874c961fff78e6018d2bb4863\" returns successfully" Apr 13 19:25:15.489114 containerd[2017]: time="2026-04-13T19:25:15.489047041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775444d9b7-dqwmj,Uid:03738109-4229-4c74-be2d-298cc7e356bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862\"" Apr 13 19:25:15.549381 systemd-networkd[1936]: cali32d23f5615d: Gained IPv6LL Apr 13 19:25:15.795297 kubelet[3431]: I0413 19:25:15.794988 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nbvwd" podStartSLOduration=47.794946038 podStartE2EDuration="47.794946038s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:15.79214057 +0000 UTC m=+54.946079374" watchObservedRunningTime="2026-04-13 19:25:15.794946038 +0000 UTC m=+54.948884746" Apr 13 19:25:15.891940 kubelet[3431]: I0413 19:25:15.891805 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gtvnr" podStartSLOduration=47.891779907 podStartE2EDuration="47.891779907s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:15.82957451 +0000 UTC m=+54.983513218" watchObservedRunningTime="2026-04-13 19:25:15.891779907 +0000 UTC m=+55.045718615" Apr 13 19:25:16.140541 kernel: calico-node[4972]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 19:25:16.828784 systemd-networkd[1936]: calie7e07966b60: Gained IPv6LL Apr 13 19:25:16.984995 systemd-networkd[1936]: vxlan.calico: Link UP Apr 13 19:25:16.985016 systemd-networkd[1936]: vxlan.calico: Gained carrier Apr 13 19:25:17.543430 containerd[2017]: time="2026-04-13T19:25:17.542431611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:17.545663 
containerd[2017]: time="2026-04-13T19:25:17.545587143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 19:25:17.548382 containerd[2017]: time="2026-04-13T19:25:17.548296059Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:17.557968 containerd[2017]: time="2026-04-13T19:25:17.557181951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:17.562704 containerd[2017]: time="2026-04-13T19:25:17.561357507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.616501482s" Apr 13 19:25:17.562704 containerd[2017]: time="2026-04-13T19:25:17.561422463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 19:25:17.565565 containerd[2017]: time="2026-04-13T19:25:17.564686835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 19:25:17.574728 containerd[2017]: time="2026-04-13T19:25:17.574658931Z" level=info msg="CreateContainer within sandbox \"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 19:25:17.625711 containerd[2017]: time="2026-04-13T19:25:17.625627611Z" level=info msg="CreateContainer within sandbox \"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1eb8d9c180d3f88f6b09df73fd6d97e4b4ed1285f8d5466ff65fadd17b6548e5\"" Apr 13 19:25:17.628829 containerd[2017]: time="2026-04-13T19:25:17.628341927Z" level=info msg="StartContainer for \"1eb8d9c180d3f88f6b09df73fd6d97e4b4ed1285f8d5466ff65fadd17b6548e5\"" Apr 13 19:25:17.794904 systemd[1]: Started cri-containerd-1eb8d9c180d3f88f6b09df73fd6d97e4b4ed1285f8d5466ff65fadd17b6548e5.scope - libcontainer container 1eb8d9c180d3f88f6b09df73fd6d97e4b4ed1285f8d5466ff65fadd17b6548e5. Apr 13 19:25:17.860417 containerd[2017]: time="2026-04-13T19:25:17.860300345Z" level=info msg="StartContainer for \"1eb8d9c180d3f88f6b09df73fd6d97e4b4ed1285f8d5466ff65fadd17b6548e5\" returns successfully" Apr 13 19:25:18.622255 systemd-networkd[1936]: vxlan.calico: Gained IPv6LL Apr 13 19:25:19.740726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792744305.mount: Deactivated successfully. 
Apr 13 19:25:20.354843 containerd[2017]: time="2026-04-13T19:25:20.354773189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.357294 containerd[2017]: time="2026-04-13T19:25:20.357232481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 19:25:20.359475 containerd[2017]: time="2026-04-13T19:25:20.359086577Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.364857 containerd[2017]: time="2026-04-13T19:25:20.364789121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:20.366720 containerd[2017]: time="2026-04-13T19:25:20.366524261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.801776334s" Apr 13 19:25:20.366720 containerd[2017]: time="2026-04-13T19:25:20.366579785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 13 19:25:20.371226 containerd[2017]: time="2026-04-13T19:25:20.371062277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:25:20.377037 containerd[2017]: time="2026-04-13T19:25:20.376955993Z" level=info msg="CreateContainer within sandbox \"c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 19:25:20.408269 containerd[2017]: time="2026-04-13T19:25:20.408146201Z" level=info msg="CreateContainer within sandbox \"c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"68ed4d18ab44f0fb0580761b27e8f7fc33ced936a5a6a3cbc0ff9fd0803624a1\"" Apr 13 19:25:20.408986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582213610.mount: Deactivated successfully. Apr 13 19:25:20.414413 containerd[2017]: time="2026-04-13T19:25:20.412650245Z" level=info msg="StartContainer for \"68ed4d18ab44f0fb0580761b27e8f7fc33ced936a5a6a3cbc0ff9fd0803624a1\"" Apr 13 19:25:20.488787 systemd[1]: Started cri-containerd-68ed4d18ab44f0fb0580761b27e8f7fc33ced936a5a6a3cbc0ff9fd0803624a1.scope - libcontainer container 68ed4d18ab44f0fb0580761b27e8f7fc33ced936a5a6a3cbc0ff9fd0803624a1. 
Apr 13 19:25:20.564669 containerd[2017]: time="2026-04-13T19:25:20.564525858Z" level=info msg="StartContainer for \"68ed4d18ab44f0fb0580761b27e8f7fc33ced936a5a6a3cbc0ff9fd0803624a1\" returns successfully" Apr 13 19:25:21.085533 containerd[2017]: time="2026-04-13T19:25:21.085425005Z" level=info msg="StopPodSandbox for \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\"" Apr 13 19:25:21.182863 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.5.0:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.5.0:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 8 cali3da603a8dea [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 9 calid360f7e92f6 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 10 califd029b8658f [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 11 calibd66a6f37d3 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 12 cali73f0c20ed9f [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 13 cali7c06c7a2186 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 14 cali32d23f5615d [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 15 calie7e07966b60 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 19:25:21.184232 ntpd[1990]: 13 Apr 19:25:21 ntpd[1990]: Listen normally on 16 vxlan.calico [fe80::64c1:dff:fe93:d8fc%12]:123 Apr 13 19:25:21.183013 ntpd[1990]: Listen normally on 8 cali3da603a8dea [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 19:25:21.183106 ntpd[1990]: Listen normally on 9 calid360f7e92f6 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 19:25:21.183175 ntpd[1990]: Listen normally on 10 califd029b8658f [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 19:25:21.183244 ntpd[1990]: Listen normally on 11 calibd66a6f37d3 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 19:25:21.183310 ntpd[1990]: Listen normally on 12 cali73f0c20ed9f [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 19:25:21.183376 ntpd[1990]: Listen normally on 13 cali7c06c7a2186 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 19:25:21.183443 ntpd[1990]: Listen normally on 14 cali32d23f5615d [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 19:25:21.183550 ntpd[1990]: Listen normally on 15 calie7e07966b60 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 19:25:21.183622 ntpd[1990]: Listen normally on 16 vxlan.calico [fe80::64c1:dff:fe93:d8fc%12]:123 Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.162 [WARNING][5754] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9085047-db0c-44e3-8b9c-c8fdeea9cd63", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14", Pod:"coredns-674b8bbfcf-nbvwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd029b8658f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.162 [INFO][5754] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.162 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" iface="eth0" netns="" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.162 [INFO][5754] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.162 [INFO][5754] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.218 [INFO][5763] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.218 [INFO][5763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.219 [INFO][5763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.235 [WARNING][5763] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.236 [INFO][5763] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.238 [INFO][5763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:21.245649 containerd[2017]: 2026-04-13 19:25:21.241 [INFO][5754] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.246592 containerd[2017]: time="2026-04-13T19:25:21.246536033Z" level=info msg="TearDown network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\" successfully" Apr 13 19:25:21.246723 containerd[2017]: time="2026-04-13T19:25:21.246618845Z" level=info msg="StopPodSandbox for \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\" returns successfully" Apr 13 19:25:21.247775 containerd[2017]: time="2026-04-13T19:25:21.247709417Z" level=info msg="RemovePodSandbox for \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\"" Apr 13 19:25:21.247775 containerd[2017]: time="2026-04-13T19:25:21.247771673Z" level=info msg="Forcibly stopping sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\"" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.333 [WARNING][5779] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9085047-db0c-44e3-8b9c-c8fdeea9cd63", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"0aeb66b2315545f85921a4e3a4658babe2b27972b3dfa28ac3ebefb7d3bbfd14", Pod:"coredns-674b8bbfcf-nbvwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd029b8658f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.333 [INFO][5779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.333 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" iface="eth0" netns="" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.333 [INFO][5779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.333 [INFO][5779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.376 [INFO][5787] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.376 [INFO][5787] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.376 [INFO][5787] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.391 [WARNING][5787] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.392 [INFO][5787] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" HandleID="k8s-pod-network.876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--nbvwd-eth0" Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.395 [INFO][5787] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:21.404116 containerd[2017]: 2026-04-13 19:25:21.400 [INFO][5779] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f" Apr 13 19:25:21.407295 containerd[2017]: time="2026-04-13T19:25:21.404594418Z" level=info msg="TearDown network for sandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\" successfully" Apr 13 19:25:21.414657 containerd[2017]: time="2026-04-13T19:25:21.414581142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:21.414825 containerd[2017]: time="2026-04-13T19:25:21.414688122Z" level=info msg="RemovePodSandbox \"876478e936699b5b20ee200c7de6d3ea57750c66e01242d90872d2760993f29f\" returns successfully" Apr 13 19:25:21.415839 containerd[2017]: time="2026-04-13T19:25:21.415768242Z" level=info msg="StopPodSandbox for \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\"" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.484 [WARNING][5801] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0", GenerateName:"calico-kube-controllers-6dfbd8bf44-", Namespace:"calico-system", SelfLink:"", UID:"856ccb99-242b-42ff-a8da-016e9416c1be", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfbd8bf44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e", Pod:"calico-kube-controllers-6dfbd8bf44-pjxdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c06c7a2186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.485 [INFO][5801] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.485 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" iface="eth0" netns="" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.485 [INFO][5801] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.485 [INFO][5801] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.527 [INFO][5808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.527 [INFO][5808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.527 [INFO][5808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.544 [WARNING][5808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.544 [INFO][5808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.546 [INFO][5808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:21.553662 containerd[2017]: 2026-04-13 19:25:21.549 [INFO][5801] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.553662 containerd[2017]: time="2026-04-13T19:25:21.553027015Z" level=info msg="TearDown network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\" successfully" Apr 13 19:25:21.553662 containerd[2017]: time="2026-04-13T19:25:21.553064011Z" level=info msg="StopPodSandbox for \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\" returns successfully" Apr 13 19:25:21.555981 containerd[2017]: time="2026-04-13T19:25:21.555112339Z" level=info msg="RemovePodSandbox for \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\"" Apr 13 19:25:21.555981 containerd[2017]: time="2026-04-13T19:25:21.555165331Z" level=info msg="Forcibly stopping sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\"" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.674 [WARNING][5822] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0", GenerateName:"calico-kube-controllers-6dfbd8bf44-", Namespace:"calico-system", SelfLink:"", UID:"856ccb99-242b-42ff-a8da-016e9416c1be", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dfbd8bf44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e", Pod:"calico-kube-controllers-6dfbd8bf44-pjxdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c06c7a2186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.675 [INFO][5822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.675 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" iface="eth0" netns="" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.675 [INFO][5822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.675 [INFO][5822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.716 [INFO][5832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.716 [INFO][5832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.716 [INFO][5832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.731 [WARNING][5832] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.732 [INFO][5832] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" HandleID="k8s-pod-network.9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Workload="ip--172--31--27--52-k8s-calico--kube--controllers--6dfbd8bf44--pjxdt-eth0" Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.734 [INFO][5832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:21.741547 containerd[2017]: 2026-04-13 19:25:21.737 [INFO][5822] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715" Apr 13 19:25:21.743699 containerd[2017]: time="2026-04-13T19:25:21.743622872Z" level=info msg="TearDown network for sandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\" successfully" Apr 13 19:25:21.750474 containerd[2017]: time="2026-04-13T19:25:21.750385016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:21.750708 containerd[2017]: time="2026-04-13T19:25:21.750639704Z" level=info msg="RemovePodSandbox \"9bcbbc279366754fb86ee75391a1315fe42b65f966b75a1ed85b634ed2a56715\" returns successfully" Apr 13 19:25:21.751414 containerd[2017]: time="2026-04-13T19:25:21.751351916Z" level=info msg="StopPodSandbox for \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\"" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.822 [WARNING][5846] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"519803bd-aa51-492a-ba0b-1cc7713863b8", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae", Pod:"calico-apiserver-648588977d-qbzbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32d23f5615d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.823 [INFO][5846] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.823 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" iface="eth0" netns="" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.823 [INFO][5846] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.823 [INFO][5846] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.969 [INFO][5853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.970 [INFO][5853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.971 [INFO][5853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.996 [WARNING][5853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.996 [INFO][5853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:21.999 [INFO][5853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.012773 containerd[2017]: 2026-04-13 19:25:22.004 [INFO][5846] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.012773 containerd[2017]: time="2026-04-13T19:25:22.011860241Z" level=info msg="TearDown network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\" successfully" Apr 13 19:25:22.012773 containerd[2017]: time="2026-04-13T19:25:22.011901485Z" level=info msg="StopPodSandbox for \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\" returns successfully" Apr 13 19:25:22.015547 containerd[2017]: time="2026-04-13T19:25:22.015367229Z" level=info msg="RemovePodSandbox for \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\"" Apr 13 19:25:22.016135 containerd[2017]: time="2026-04-13T19:25:22.015440765Z" level=info msg="Forcibly stopping sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\"" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.158 [WARNING][5885] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"519803bd-aa51-492a-ba0b-1cc7713863b8", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae", Pod:"calico-apiserver-648588977d-qbzbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32d23f5615d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.158 [INFO][5885] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.158 [INFO][5885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" iface="eth0" netns="" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.158 [INFO][5885] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.158 [INFO][5885] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.212 [INFO][5894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.212 [INFO][5894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.213 [INFO][5894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.226 [WARNING][5894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.226 [INFO][5894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" HandleID="k8s-pod-network.7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--qbzbd-eth0" Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.228 [INFO][5894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.235585 containerd[2017]: 2026-04-13 19:25:22.232 [INFO][5885] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8" Apr 13 19:25:22.236339 containerd[2017]: time="2026-04-13T19:25:22.235640838Z" level=info msg="TearDown network for sandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\" successfully" Apr 13 19:25:22.243196 containerd[2017]: time="2026-04-13T19:25:22.243109614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:22.243610 containerd[2017]: time="2026-04-13T19:25:22.243233766Z" level=info msg="RemovePodSandbox \"7a19ca62b4a65c85874f36574cfaaa5869a5df715608ca9e82d1eca00054eef8\" returns successfully" Apr 13 19:25:22.244169 containerd[2017]: time="2026-04-13T19:25:22.244129326Z" level=info msg="StopPodSandbox for \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\"" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.309 [WARNING][5909] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"90202ccf-b846-4ffb-bfc4-994f0a0246ae", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e", Pod:"csi-node-driver-hrfts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3da603a8dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.310 [INFO][5909] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.310 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" iface="eth0" netns="" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.310 [INFO][5909] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.310 [INFO][5909] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.356 [INFO][5916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.356 [INFO][5916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.356 [INFO][5916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.375 [WARNING][5916] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.376 [INFO][5916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.378 [INFO][5916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.386436 containerd[2017]: 2026-04-13 19:25:22.383 [INFO][5909] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.388062 containerd[2017]: time="2026-04-13T19:25:22.387611779Z" level=info msg="TearDown network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\" successfully" Apr 13 19:25:22.388062 containerd[2017]: time="2026-04-13T19:25:22.387658231Z" level=info msg="StopPodSandbox for \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\" returns successfully" Apr 13 19:25:22.389104 containerd[2017]: time="2026-04-13T19:25:22.389038015Z" level=info msg="RemovePodSandbox for \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\"" Apr 13 19:25:22.389235 containerd[2017]: time="2026-04-13T19:25:22.389098831Z" level=info msg="Forcibly stopping sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\"" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.460 [WARNING][5931] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"90202ccf-b846-4ffb-bfc4-994f0a0246ae", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e", Pod:"csi-node-driver-hrfts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3da603a8dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.461 [INFO][5931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.461 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" iface="eth0" netns="" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.461 [INFO][5931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.461 [INFO][5931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.531 [INFO][5939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.531 [INFO][5939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.531 [INFO][5939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.546 [WARNING][5939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.546 [INFO][5939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" HandleID="k8s-pod-network.663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Workload="ip--172--31--27--52-k8s-csi--node--driver--hrfts-eth0" Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.549 [INFO][5939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.555585 containerd[2017]: 2026-04-13 19:25:22.552 [INFO][5931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d" Apr 13 19:25:22.556743 containerd[2017]: time="2026-04-13T19:25:22.555627968Z" level=info msg="TearDown network for sandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\" successfully" Apr 13 19:25:22.562100 containerd[2017]: time="2026-04-13T19:25:22.562021712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:22.562699 containerd[2017]: time="2026-04-13T19:25:22.562124204Z" level=info msg="RemovePodSandbox \"663408e65723f49445b6dae842b76fd2e294fc28547a46dbbcd801365438da6d\" returns successfully" Apr 13 19:25:22.563827 containerd[2017]: time="2026-04-13T19:25:22.563586248Z" level=info msg="StopPodSandbox for \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\"" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.637 [WARNING][5953] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ca478177-20bf-4954-9621-ef6793bbf95a", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f", Pod:"goldmane-5b85766d88-b6rr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid360f7e92f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.637 [INFO][5953] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.638 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" iface="eth0" netns="" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.638 [INFO][5953] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.638 [INFO][5953] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.679 [INFO][5960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.679 [INFO][5960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.679 [INFO][5960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.693 [WARNING][5960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.693 [INFO][5960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.695 [INFO][5960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.702819 containerd[2017]: 2026-04-13 19:25:22.698 [INFO][5953] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.702819 containerd[2017]: time="2026-04-13T19:25:22.702754101Z" level=info msg="TearDown network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\" successfully" Apr 13 19:25:22.702819 containerd[2017]: time="2026-04-13T19:25:22.702792777Z" level=info msg="StopPodSandbox for \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\" returns successfully" Apr 13 19:25:22.706170 containerd[2017]: time="2026-04-13T19:25:22.705340245Z" level=info msg="RemovePodSandbox for \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\"" Apr 13 19:25:22.706170 containerd[2017]: time="2026-04-13T19:25:22.705580113Z" level=info msg="Forcibly stopping sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\"" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.816 [WARNING][5974] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ca478177-20bf-4954-9621-ef6793bbf95a", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"c53dda8a7aec23d2870eaf69ac5ef8528f044f301ffa0afb3310b8785b0f2d9f", Pod:"goldmane-5b85766d88-b6rr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid360f7e92f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.817 [INFO][5974] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.817 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" iface="eth0" netns="" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.817 [INFO][5974] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.817 [INFO][5974] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.892 [INFO][5992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.893 [INFO][5992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.893 [INFO][5992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.907 [WARNING][5992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.907 [INFO][5992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" HandleID="k8s-pod-network.7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Workload="ip--172--31--27--52-k8s-goldmane--5b85766d88--b6rr7-eth0" Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.910 [INFO][5992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:22.918056 containerd[2017]: 2026-04-13 19:25:22.914 [INFO][5974] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc" Apr 13 19:25:22.919544 containerd[2017]: time="2026-04-13T19:25:22.918572446Z" level=info msg="TearDown network for sandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\" successfully" Apr 13 19:25:22.925047 containerd[2017]: time="2026-04-13T19:25:22.924965458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:22.925208 containerd[2017]: time="2026-04-13T19:25:22.925060918Z" level=info msg="RemovePodSandbox \"7a467f9afbcc40a2d1d11f2f27c59300b61a81459672e6f59394c0abf3d0cafc\" returns successfully" Apr 13 19:25:22.926127 containerd[2017]: time="2026-04-13T19:25:22.925836190Z" level=info msg="StopPodSandbox for \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\"" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.000 [WARNING][6008] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"31e132ac-0e5d-4c85-a51c-36b4b5148995", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210", Pod:"coredns-674b8bbfcf-gtvnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f0c20ed9f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.001 [INFO][6008] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.001 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" iface="eth0" netns="" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.001 [INFO][6008] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.001 [INFO][6008] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.049 [INFO][6015] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.049 [INFO][6015] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.049 [INFO][6015] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.063 [WARNING][6015] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.063 [INFO][6015] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.066 [INFO][6015] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.073261 containerd[2017]: 2026-04-13 19:25:23.069 [INFO][6008] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.074647 containerd[2017]: time="2026-04-13T19:25:23.073259706Z" level=info msg="TearDown network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\" successfully" Apr 13 19:25:23.074647 containerd[2017]: time="2026-04-13T19:25:23.073307166Z" level=info msg="StopPodSandbox for \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\" returns successfully" Apr 13 19:25:23.076537 containerd[2017]: time="2026-04-13T19:25:23.076372650Z" level=info msg="RemovePodSandbox for \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\"" Apr 13 19:25:23.076649 containerd[2017]: time="2026-04-13T19:25:23.076554594Z" level=info msg="Forcibly stopping sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\"" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.148 [WARNING][6030] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"31e132ac-0e5d-4c85-a51c-36b4b5148995", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"f282692defcfb22d5327ca0c80f1263c04cfc7eb5d6a985ac0fbc82833b7f210", Pod:"coredns-674b8bbfcf-gtvnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f0c20ed9f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.149 [INFO][6030] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.149 [INFO][6030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" iface="eth0" netns="" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.149 [INFO][6030] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.149 [INFO][6030] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.192 [INFO][6037] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.194 [INFO][6037] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.194 [INFO][6037] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.211 [WARNING][6037] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.211 [INFO][6037] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" HandleID="k8s-pod-network.ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Workload="ip--172--31--27--52-k8s-coredns--674b8bbfcf--gtvnr-eth0" Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.215 [INFO][6037] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.222237 containerd[2017]: 2026-04-13 19:25:23.218 [INFO][6030] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f" Apr 13 19:25:23.223141 containerd[2017]: time="2026-04-13T19:25:23.222336319Z" level=info msg="TearDown network for sandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\" successfully" Apr 13 19:25:23.230850 containerd[2017]: time="2026-04-13T19:25:23.230774287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:23.230977 containerd[2017]: time="2026-04-13T19:25:23.230924911Z" level=info msg="RemovePodSandbox \"ddfd8576b8690a254c4244509f9d456aa285352440cf00434bf8ab1b3d0fed6f\" returns successfully" Apr 13 19:25:23.231852 containerd[2017]: time="2026-04-13T19:25:23.231788563Z" level=info msg="StopPodSandbox for \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\"" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.297 [WARNING][6051] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.297 [INFO][6051] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.297 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" iface="eth0" netns="" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.297 [INFO][6051] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.297 [INFO][6051] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.339 [INFO][6058] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.339 [INFO][6058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.339 [INFO][6058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.356 [WARNING][6058] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.357 [INFO][6058] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.359 [INFO][6058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.367402 containerd[2017]: 2026-04-13 19:25:23.363 [INFO][6051] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.367402 containerd[2017]: time="2026-04-13T19:25:23.367355876Z" level=info msg="TearDown network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\" successfully" Apr 13 19:25:23.368189 containerd[2017]: time="2026-04-13T19:25:23.367411676Z" level=info msg="StopPodSandbox for \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\" returns successfully" Apr 13 19:25:23.369093 containerd[2017]: time="2026-04-13T19:25:23.369041420Z" level=info msg="RemovePodSandbox for \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\"" Apr 13 19:25:23.369256 containerd[2017]: time="2026-04-13T19:25:23.369099572Z" level=info msg="Forcibly stopping sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\"" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.441 [WARNING][6072] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" WorkloadEndpoint="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.441 [INFO][6072] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.441 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" iface="eth0" netns="" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.441 [INFO][6072] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.441 [INFO][6072] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.483 [INFO][6079] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.483 [INFO][6079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.483 [INFO][6079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.501 [WARNING][6079] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.501 [INFO][6079] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" HandleID="k8s-pod-network.b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Workload="ip--172--31--27--52-k8s-whisker--6d6694fbb6--hdzjx-eth0" Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.504 [INFO][6079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.511155 containerd[2017]: 2026-04-13 19:25:23.507 [INFO][6072] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a" Apr 13 19:25:23.511931 containerd[2017]: time="2026-04-13T19:25:23.511213149Z" level=info msg="TearDown network for sandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\" successfully" Apr 13 19:25:23.518962 containerd[2017]: time="2026-04-13T19:25:23.518889561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:23.519115 containerd[2017]: time="2026-04-13T19:25:23.518992365Z" level=info msg="RemovePodSandbox \"b763ceadcc040cf975d14712284cc444aa5c5df43572888a002f19629443720a\" returns successfully" Apr 13 19:25:23.519868 containerd[2017]: time="2026-04-13T19:25:23.519804405Z" level=info msg="StopPodSandbox for \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\"" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.592 [WARNING][6093] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"1adedd17-1d2f-4205-a531-c8bdcaf6fdc9", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36", Pod:"calico-apiserver-648588977d-fkz57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd66a6f37d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.592 [INFO][6093] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.593 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" iface="eth0" netns="" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.593 [INFO][6093] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.593 [INFO][6093] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.638 [INFO][6100] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.638 [INFO][6100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.638 [INFO][6100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.656 [WARNING][6100] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.657 [INFO][6100] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.663 [INFO][6100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.670655 containerd[2017]: 2026-04-13 19:25:23.667 [INFO][6093] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.672662 containerd[2017]: time="2026-04-13T19:25:23.670703277Z" level=info msg="TearDown network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\" successfully" Apr 13 19:25:23.672662 containerd[2017]: time="2026-04-13T19:25:23.670742145Z" level=info msg="StopPodSandbox for \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\" returns successfully" Apr 13 19:25:23.672662 containerd[2017]: time="2026-04-13T19:25:23.672613365Z" level=info msg="RemovePodSandbox for \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\"" Apr 13 19:25:23.672817 containerd[2017]: time="2026-04-13T19:25:23.672691077Z" level=info msg="Forcibly stopping sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\"" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.744 [WARNING][6114] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0", GenerateName:"calico-apiserver-648588977d-", Namespace:"calico-system", SelfLink:"", UID:"1adedd17-1d2f-4205-a531-c8bdcaf6fdc9", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 19, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"648588977d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-52", ContainerID:"9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36", Pod:"calico-apiserver-648588977d-fkz57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd66a6f37d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.744 [INFO][6114] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.744 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" iface="eth0" netns="" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.744 [INFO][6114] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.744 [INFO][6114] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.788 [INFO][6121] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.788 [INFO][6121] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.788 [INFO][6121] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.803 [WARNING][6121] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.803 [INFO][6121] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" HandleID="k8s-pod-network.620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Workload="ip--172--31--27--52-k8s-calico--apiserver--648588977d--fkz57-eth0" Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.805 [INFO][6121] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 19:25:23.812789 containerd[2017]: 2026-04-13 19:25:23.808 [INFO][6114] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661" Apr 13 19:25:23.812789 containerd[2017]: time="2026-04-13T19:25:23.811893982Z" level=info msg="TearDown network for sandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\" successfully" Apr 13 19:25:23.818413 containerd[2017]: time="2026-04-13T19:25:23.818326330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 19:25:23.818959 containerd[2017]: time="2026-04-13T19:25:23.818427658Z" level=info msg="RemovePodSandbox \"620c5be1126710a05cd985e7688a97054840c5ca2299f224cadba875ec6fe661\" returns successfully" Apr 13 19:25:25.413667 containerd[2017]: time="2026-04-13T19:25:25.413591626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:25.416691 containerd[2017]: time="2026-04-13T19:25:25.416613106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 19:25:25.423164 containerd[2017]: time="2026-04-13T19:25:25.423078310Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:25.431888 containerd[2017]: time="2026-04-13T19:25:25.431810614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:25.437705 containerd[2017]: time="2026-04-13T19:25:25.437623978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 5.066492185s" Apr 13 19:25:25.437705 containerd[2017]: time="2026-04-13T19:25:25.437698522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:25:25.442139 containerd[2017]: time="2026-04-13T19:25:25.441798898Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 19:25:25.451270 containerd[2017]: time="2026-04-13T19:25:25.451178254Z" level=info msg="CreateContainer within sandbox \"9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:25:25.479282 containerd[2017]: time="2026-04-13T19:25:25.479229046Z" level=info msg="CreateContainer within sandbox \"9420f0dcd871fd253b96ae577bb43073b15d2ab7a737c9a47c94f60102cbda36\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0a75950744737266f427ffe524e18363acb80f753a7e8e331ce79032263be74a\"" Apr 13 19:25:25.482789 containerd[2017]: time="2026-04-13T19:25:25.481250758Z" level=info msg="StartContainer for \"0a75950744737266f427ffe524e18363acb80f753a7e8e331ce79032263be74a\"" Apr 13 19:25:25.552854 systemd[1]: run-containerd-runc-k8s.io-0a75950744737266f427ffe524e18363acb80f753a7e8e331ce79032263be74a-runc.gKes6P.mount: Deactivated successfully. Apr 13 19:25:25.566824 systemd[1]: Started cri-containerd-0a75950744737266f427ffe524e18363acb80f753a7e8e331ce79032263be74a.scope - libcontainer container 0a75950744737266f427ffe524e18363acb80f753a7e8e331ce79032263be74a. Apr 13 19:25:25.661958 containerd[2017]: time="2026-04-13T19:25:25.661876991Z" level=info msg="StartContainer for \"0a75950744737266f427ffe524e18363acb80f753a7e8e331ce79032263be74a\" returns successfully" Apr 13 19:25:25.727620 systemd[1]: Started sshd@7-172.31.27.52:22-4.175.71.9:58092.service - OpenSSH per-connection server daemon (4.175.71.9:58092). Apr 13 19:25:25.908296 kubelet[3431]: I0413 19:25:25.906490 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-b6rr7" podStartSLOduration=33.902842783 podStartE2EDuration="39.906432145s" podCreationTimestamp="2026-04-13 19:24:46 +0000 UTC" firstStartedPulling="2026-04-13 19:25:14.364499411 +0000 UTC m=+53.518438131" lastFinishedPulling="2026-04-13 19:25:20.368088797 +0000 UTC m=+59.522027493" observedRunningTime="2026-04-13 19:25:20.862905463 +0000 UTC m=+60.016844183" watchObservedRunningTime="2026-04-13 19:25:25.906432145 +0000 UTC m=+65.060370853" Apr 13 19:25:26.715567 sshd[6181]: Accepted publickey for core from 4.175.71.9 port 58092 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:26.726703 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:26.744763 systemd-logind[1997]: New session 8 of user core. Apr 13 19:25:26.752743 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:25:26.908289 kubelet[3431]: I0413 19:25:26.908204 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:27.634322 sshd[6181]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:27.643498 systemd[1]: sshd@7-172.31.27.52:22-4.175.71.9:58092.service: Deactivated successfully. Apr 13 19:25:27.652885 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:25:27.655378 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:25:27.660779 systemd-logind[1997]: Removed session 8. 
Apr 13 19:25:29.300773 containerd[2017]: time="2026-04-13T19:25:29.300692269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:29.303199 containerd[2017]: time="2026-04-13T19:25:29.302695705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 19:25:29.305380 containerd[2017]: time="2026-04-13T19:25:29.305239141Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:29.320059 containerd[2017]: time="2026-04-13T19:25:29.319994701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:29.322115 containerd[2017]: time="2026-04-13T19:25:29.321913633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.880051135s" Apr 13 19:25:29.322115 containerd[2017]: time="2026-04-13T19:25:29.321997393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 19:25:29.325899 containerd[2017]: time="2026-04-13T19:25:29.325773457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 19:25:29.369313 containerd[2017]: time="2026-04-13T19:25:29.369253790Z" level=info msg="CreateContainer within sandbox \"49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 19:25:29.398980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453851633.mount: Deactivated successfully. Apr 13 19:25:29.406547 containerd[2017]: time="2026-04-13T19:25:29.404835626Z" level=info msg="CreateContainer within sandbox \"49fcc0fa00365b124e163ad8ea902ce952ccdc6f8fd082d2cb1693a2fbd2018e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1f7010dd6162c17f969e1404b14c46ace03059effb8bf7c769db853312cad54c\"" Apr 13 19:25:29.407217 containerd[2017]: time="2026-04-13T19:25:29.407156378Z" level=info msg="StartContainer for \"1f7010dd6162c17f969e1404b14c46ace03059effb8bf7c769db853312cad54c\"" Apr 13 19:25:29.482765 systemd[1]: Started cri-containerd-1f7010dd6162c17f969e1404b14c46ace03059effb8bf7c769db853312cad54c.scope - libcontainer container 1f7010dd6162c17f969e1404b14c46ace03059effb8bf7c769db853312cad54c. 
Apr 13 19:25:29.556050 containerd[2017]: time="2026-04-13T19:25:29.555096975Z" level=info msg="StartContainer for \"1f7010dd6162c17f969e1404b14c46ace03059effb8bf7c769db853312cad54c\" returns successfully" Apr 13 19:25:29.731626 containerd[2017]: time="2026-04-13T19:25:29.728773527Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:29.733716 containerd[2017]: time="2026-04-13T19:25:29.733662004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 19:25:29.740502 containerd[2017]: time="2026-04-13T19:25:29.740395180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 414.231999ms" Apr 13 19:25:29.740769 containerd[2017]: time="2026-04-13T19:25:29.740704252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 19:25:29.743426 containerd[2017]: time="2026-04-13T19:25:29.743265676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 19:25:29.756568 containerd[2017]: time="2026-04-13T19:25:29.755818024Z" level=info msg="CreateContainer within sandbox \"0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 19:25:29.795586 containerd[2017]: time="2026-04-13T19:25:29.795319204Z" level=info msg="CreateContainer within sandbox \"0b8f8b93306dc511ba7d2ecdea40c35ffd844fd29bc6e34860974bb08c7bc1ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0766756010b9a39e9f7c5aa99d108332264ee0e90f6128252671b5f6e395559\"" Apr 13 19:25:29.797775 containerd[2017]: time="2026-04-13T19:25:29.797692840Z" level=info msg="StartContainer for \"c0766756010b9a39e9f7c5aa99d108332264ee0e90f6128252671b5f6e395559\"" Apr 13 19:25:29.873735 systemd[1]: Started cri-containerd-c0766756010b9a39e9f7c5aa99d108332264ee0e90f6128252671b5f6e395559.scope - libcontainer container c0766756010b9a39e9f7c5aa99d108332264ee0e90f6128252671b5f6e395559. 
Apr 13 19:25:29.940607 kubelet[3431]: I0413 19:25:29.939779 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-648588977d-fkz57" podStartSLOduration=33.247155016 podStartE2EDuration="43.939423137s" podCreationTimestamp="2026-04-13 19:24:46 +0000 UTC" firstStartedPulling="2026-04-13 19:25:14.747419125 +0000 UTC m=+53.901357833" lastFinishedPulling="2026-04-13 19:25:25.439687234 +0000 UTC m=+64.593625954" observedRunningTime="2026-04-13 19:25:25.909567853 +0000 UTC m=+65.063506597" watchObservedRunningTime="2026-04-13 19:25:29.939423137 +0000 UTC m=+69.093361857" Apr 13 19:25:30.015533 containerd[2017]: time="2026-04-13T19:25:30.013045045Z" level=info msg="StartContainer for \"c0766756010b9a39e9f7c5aa99d108332264ee0e90f6128252671b5f6e395559\" returns successfully" Apr 13 19:25:30.053292 kubelet[3431]: I0413 19:25:30.053215 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dfbd8bf44-pjxdt" podStartSLOduration=26.874567743 podStartE2EDuration="41.053194489s" podCreationTimestamp="2026-04-13 19:24:49 +0000 UTC" firstStartedPulling="2026-04-13 19:25:15.145592423 +0000 UTC m=+54.299531143" lastFinishedPulling="2026-04-13 19:25:29.324219193 +0000 UTC m=+68.478157889" observedRunningTime="2026-04-13 19:25:29.937355801 +0000 UTC m=+69.091294533" watchObservedRunningTime="2026-04-13 19:25:30.053194489 +0000 UTC m=+69.207133209" Apr 13 19:25:30.955534 kubelet[3431]: I0413 19:25:30.955262 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-648588977d-qbzbd" podStartSLOduration=30.414680209 podStartE2EDuration="44.955237242s" podCreationTimestamp="2026-04-13 19:24:46 +0000 UTC" firstStartedPulling="2026-04-13 19:25:15.201786023 +0000 UTC m=+54.355724731" lastFinishedPulling="2026-04-13 19:25:29.742342984 +0000 UTC m=+68.896281764" observedRunningTime="2026-04-13 19:25:30.954182478 +0000 UTC m=+70.108121294" watchObservedRunningTime="2026-04-13 19:25:30.955237242 +0000 UTC m=+70.109175938" Apr 13 19:25:31.366353 containerd[2017]: time="2026-04-13T19:25:31.364552888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:31.367146 containerd[2017]: time="2026-04-13T19:25:31.367082620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 19:25:31.370956 containerd[2017]: time="2026-04-13T19:25:31.370861756Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:31.386073 containerd[2017]: time="2026-04-13T19:25:31.385968040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:31.388816 containerd[2017]: time="2026-04-13T19:25:31.388717660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.645376204s" Apr 13 19:25:31.389304 containerd[2017]: 
time="2026-04-13T19:25:31.389059252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 19:25:31.392167 containerd[2017]: time="2026-04-13T19:25:31.391750156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 19:25:31.397511 containerd[2017]: time="2026-04-13T19:25:31.397406500Z" level=info msg="CreateContainer within sandbox \"e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 19:25:31.444255 containerd[2017]: time="2026-04-13T19:25:31.442623796Z" level=info msg="CreateContainer within sandbox \"e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"20502908d96aa3d31243a53d0683922ccaee2f01e5b4eef47ccfe2d28bd10938\"" Apr 13 19:25:31.450489 containerd[2017]: time="2026-04-13T19:25:31.448337596Z" level=info msg="StartContainer for \"20502908d96aa3d31243a53d0683922ccaee2f01e5b4eef47ccfe2d28bd10938\"" Apr 13 19:25:31.551826 systemd[1]: Started cri-containerd-20502908d96aa3d31243a53d0683922ccaee2f01e5b4eef47ccfe2d28bd10938.scope - libcontainer container 20502908d96aa3d31243a53d0683922ccaee2f01e5b4eef47ccfe2d28bd10938. Apr 13 19:25:31.707641 containerd[2017]: time="2026-04-13T19:25:31.707563913Z" level=info msg="StartContainer for \"20502908d96aa3d31243a53d0683922ccaee2f01e5b4eef47ccfe2d28bd10938\" returns successfully" Apr 13 19:25:31.938773 kubelet[3431]: I0413 19:25:31.938157 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 19:25:32.830023 systemd[1]: Started sshd@8-172.31.27.52:22-4.175.71.9:58106.service - OpenSSH per-connection server daemon (4.175.71.9:58106). 
Apr 13 19:25:33.391991 containerd[2017]: time="2026-04-13T19:25:33.391933014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:33.394388 containerd[2017]: time="2026-04-13T19:25:33.394328634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 19:25:33.396805 containerd[2017]: time="2026-04-13T19:25:33.396718734Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:33.406281 containerd[2017]: time="2026-04-13T19:25:33.406194858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:33.409651 containerd[2017]: time="2026-04-13T19:25:33.409441242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.01761953s" Apr 13 19:25:33.409651 containerd[2017]: time="2026-04-13T19:25:33.409527018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 19:25:33.412268 containerd[2017]: time="2026-04-13T19:25:33.411738678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 19:25:33.419910 containerd[2017]: time="2026-04-13T19:25:33.419836374Z" level=info msg="CreateContainer within sandbox \"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 19:25:33.460202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213622283.mount: Deactivated successfully. Apr 13 19:25:33.467119 containerd[2017]: time="2026-04-13T19:25:33.467064378Z" level=info msg="CreateContainer within sandbox \"fddbd3a816e1a1f11bf5e26e8acb460bf2366490e814fae27e4364c61c0bc28e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"41eb87b621234177299e016bb39ba894753cbd1afd8b82945dc1fb019a2001af\"" Apr 13 19:25:33.468073 containerd[2017]: time="2026-04-13T19:25:33.468023634Z" level=info msg="StartContainer for \"41eb87b621234177299e016bb39ba894753cbd1afd8b82945dc1fb019a2001af\"" Apr 13 19:25:33.542791 systemd[1]: Started cri-containerd-41eb87b621234177299e016bb39ba894753cbd1afd8b82945dc1fb019a2001af.scope - libcontainer container 41eb87b621234177299e016bb39ba894753cbd1afd8b82945dc1fb019a2001af. 
Apr 13 19:25:33.598749 containerd[2017]: time="2026-04-13T19:25:33.598659151Z" level=info msg="StartContainer for \"41eb87b621234177299e016bb39ba894753cbd1afd8b82945dc1fb019a2001af\" returns successfully" Apr 13 19:25:33.878258 sshd[6373]: Accepted publickey for core from 4.175.71.9 port 58106 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:33.880802 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:33.894210 systemd-logind[1997]: New session 9 of user core. Apr 13 19:25:33.901730 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:25:33.992112 kubelet[3431]: I0413 19:25:33.990130 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hrfts" podStartSLOduration=25.522515796 podStartE2EDuration="44.990107637s" podCreationTimestamp="2026-04-13 19:24:49 +0000 UTC" firstStartedPulling="2026-04-13 19:25:13.943805437 +0000 UTC m=+53.097744133" lastFinishedPulling="2026-04-13 19:25:33.411397254 +0000 UTC m=+72.565335974" observedRunningTime="2026-04-13 19:25:33.986367225 +0000 UTC m=+73.140305945" watchObservedRunningTime="2026-04-13 19:25:33.990107637 +0000 UTC m=+73.144046345" Apr 13 19:25:34.278904 kubelet[3431]: I0413 19:25:34.278700 3431 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 19:25:34.278904 kubelet[3431]: I0413 19:25:34.278772 3431 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 19:25:34.752402 sshd[6373]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:34.764009 systemd[1]: sshd@8-172.31.27.52:22-4.175.71.9:58106.service: Deactivated successfully. Apr 13 19:25:34.770595 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:25:34.772829 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:25:34.776635 systemd-logind[1997]: Removed session 9. Apr 13 19:25:35.074973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080735375.mount: Deactivated successfully. 
Apr 13 19:25:35.113515 containerd[2017]: time="2026-04-13T19:25:35.112794834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:35.115765 containerd[2017]: time="2026-04-13T19:25:35.115257426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Apr 13 19:25:35.118298 containerd[2017]: time="2026-04-13T19:25:35.117890022Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:35.126782 containerd[2017]: time="2026-04-13T19:25:35.126695154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:25:35.129147 containerd[2017]: time="2026-04-13T19:25:35.128789322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.716989156s"
Apr 13 19:25:35.129147 containerd[2017]: time="2026-04-13T19:25:35.128852826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Apr 13 19:25:35.138597 containerd[2017]: time="2026-04-13T19:25:35.138427758Z" level=info msg="CreateContainer within sandbox \"e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 13 19:25:35.169646 containerd[2017]: time="2026-04-13T19:25:35.169408087Z" level=info msg="CreateContainer within sandbox \"e49592790e132237dd2cfd55cd048281177c7b4bdb8944067f81e41b8a0b9862\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cae40919f42c7b0f4452590d0b1226ff66e032c7b90b988b663597b0302d7717\""
Apr 13 19:25:35.172547 containerd[2017]: time="2026-04-13T19:25:35.171231907Z" level=info msg="StartContainer for \"cae40919f42c7b0f4452590d0b1226ff66e032c7b90b988b663597b0302d7717\""
Apr 13 19:25:35.243792 systemd[1]: Started cri-containerd-cae40919f42c7b0f4452590d0b1226ff66e032c7b90b988b663597b0302d7717.scope - libcontainer container cae40919f42c7b0f4452590d0b1226ff66e032c7b90b988b663597b0302d7717.
Apr 13 19:25:35.313986 containerd[2017]: time="2026-04-13T19:25:35.313917823Z" level=info msg="StartContainer for \"cae40919f42c7b0f4452590d0b1226ff66e032c7b90b988b663597b0302d7717\" returns successfully"
Apr 13 19:25:39.923019 systemd[1]: Started sshd@9-172.31.27.52:22-4.175.71.9:45280.service - OpenSSH per-connection server daemon (4.175.71.9:45280).
Apr 13 19:25:40.602099 kubelet[3431]: I0413 19:25:40.601662 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:25:40.658425 kubelet[3431]: I0413 19:25:40.658320 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-775444d9b7-dqwmj" podStartSLOduration=9.020402153 podStartE2EDuration="28.658297478s" podCreationTimestamp="2026-04-13 19:25:12 +0000 UTC" firstStartedPulling="2026-04-13 19:25:15.493271017 +0000 UTC m=+54.647209737" lastFinishedPulling="2026-04-13 19:25:35.131166354 +0000 UTC m=+74.285105062" observedRunningTime="2026-04-13 19:25:35.987755711 +0000 UTC m=+75.141694431" watchObservedRunningTime="2026-04-13 19:25:40.658297478 +0000 UTC m=+79.812236198"
Apr 13 19:25:40.888124 sshd[6484]: Accepted publickey for core from 4.175.71.9 port 45280 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:40.890924 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:40.899009 systemd-logind[1997]: New session 10 of user core.
Apr 13 19:25:40.908758 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 19:25:41.696299 sshd[6484]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:41.725049 systemd[1]: sshd@9-172.31.27.52:22-4.175.71.9:45280.service: Deactivated successfully.
Apr 13 19:25:41.732926 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 19:25:41.738477 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit.
Apr 13 19:25:41.746897 systemd-logind[1997]: Removed session 10.
Apr 13 19:25:46.898701 systemd[1]: Started sshd@10-172.31.27.52:22-4.175.71.9:51038.service - OpenSSH per-connection server daemon (4.175.71.9:51038).
Apr 13 19:25:47.923611 sshd[6520]: Accepted publickey for core from 4.175.71.9 port 51038 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:47.927193 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:47.935903 systemd-logind[1997]: New session 11 of user core.
Apr 13 19:25:47.943704 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 19:25:48.783611 sshd[6520]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:48.792055 systemd[1]: sshd@10-172.31.27.52:22-4.175.71.9:51038.service: Deactivated successfully.
Apr 13 19:25:48.800928 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 19:25:48.805637 systemd-logind[1997]: Session 11 logged out. Waiting for processes to exit.
Apr 13 19:25:48.808817 systemd-logind[1997]: Removed session 11.
Apr 13 19:25:48.968659 systemd[1]: Started sshd@11-172.31.27.52:22-4.175.71.9:51046.service - OpenSSH per-connection server daemon (4.175.71.9:51046).
Apr 13 19:25:49.968257 sshd[6550]: Accepted publickey for core from 4.175.71.9 port 51046 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:49.973398 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:49.986252 systemd-logind[1997]: New session 12 of user core.
Apr 13 19:25:49.992813 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 19:25:50.888282 sshd[6550]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:50.895873 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit.
Apr 13 19:25:50.896807 systemd[1]: sshd@11-172.31.27.52:22-4.175.71.9:51046.service: Deactivated successfully.
Apr 13 19:25:50.901915 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 19:25:50.903998 systemd-logind[1997]: Removed session 12.
Apr 13 19:25:51.064025 systemd[1]: Started sshd@12-172.31.27.52:22-4.175.71.9:51054.service - OpenSSH per-connection server daemon (4.175.71.9:51054).
Apr 13 19:25:52.056568 sshd[6565]: Accepted publickey for core from 4.175.71.9 port 51054 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:52.058664 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:52.066834 systemd-logind[1997]: New session 13 of user core.
Apr 13 19:25:52.077761 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 19:25:52.580583 kubelet[3431]: I0413 19:25:52.578323 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 19:25:52.926789 sshd[6565]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:52.934173 systemd[1]: sshd@12-172.31.27.52:22-4.175.71.9:51054.service: Deactivated successfully.
Apr 13 19:25:52.940140 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 19:25:52.943568 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit.
Apr 13 19:25:52.946940 systemd-logind[1997]: Removed session 13.
Apr 13 19:25:58.122655 systemd[1]: Started sshd@13-172.31.27.52:22-4.175.71.9:46530.service - OpenSSH per-connection server daemon (4.175.71.9:46530).
Apr 13 19:25:59.147825 sshd[6614]: Accepted publickey for core from 4.175.71.9 port 46530 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:25:59.151373 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:25:59.159742 systemd-logind[1997]: New session 14 of user core.
Apr 13 19:25:59.168721 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 19:25:59.969328 sshd[6614]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:59.976896 systemd[1]: sshd@13-172.31.27.52:22-4.175.71.9:46530.service: Deactivated successfully.
Apr 13 19:25:59.983053 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 19:25:59.986144 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit.
Apr 13 19:25:59.989756 systemd-logind[1997]: Removed session 14.
Apr 13 19:26:00.138106 systemd[1]: Started sshd@14-172.31.27.52:22-4.175.71.9:46542.service - OpenSSH per-connection server daemon (4.175.71.9:46542).
Apr 13 19:26:01.096301 sshd[6647]: Accepted publickey for core from 4.175.71.9 port 46542 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:01.099062 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:01.109287 systemd-logind[1997]: New session 15 of user core.
Apr 13 19:26:01.116743 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 19:26:02.231091 sshd[6647]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:02.238534 systemd[1]: sshd@14-172.31.27.52:22-4.175.71.9:46542.service: Deactivated successfully.
Apr 13 19:26:02.244548 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 19:26:02.246323 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit.
Apr 13 19:26:02.249440 systemd-logind[1997]: Removed session 15.
Apr 13 19:26:02.401982 systemd[1]: Started sshd@15-172.31.27.52:22-4.175.71.9:46548.service - OpenSSH per-connection server daemon (4.175.71.9:46548).
Apr 13 19:26:03.379586 sshd[6658]: Accepted publickey for core from 4.175.71.9 port 46548 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:03.382093 sshd[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:03.392211 systemd-logind[1997]: New session 16 of user core.
Apr 13 19:26:03.400733 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 19:26:05.006023 sshd[6658]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:05.013867 systemd[1]: sshd@15-172.31.27.52:22-4.175.71.9:46548.service: Deactivated successfully.
Apr 13 19:26:05.019880 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 19:26:05.023069 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit.
Apr 13 19:26:05.025717 systemd-logind[1997]: Removed session 16.
Apr 13 19:26:05.194038 systemd[1]: Started sshd@16-172.31.27.52:22-4.175.71.9:46558.service - OpenSSH per-connection server daemon (4.175.71.9:46558).
Apr 13 19:26:06.233497 sshd[6707]: Accepted publickey for core from 4.175.71.9 port 46558 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:06.235581 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:06.243310 systemd-logind[1997]: New session 17 of user core.
Apr 13 19:26:06.257742 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 19:26:07.329509 sshd[6707]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:07.336176 systemd[1]: sshd@16-172.31.27.52:22-4.175.71.9:46558.service: Deactivated successfully.
Apr 13 19:26:07.340413 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 19:26:07.345415 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit.
Apr 13 19:26:07.347570 systemd-logind[1997]: Removed session 17.
Apr 13 19:26:07.512962 systemd[1]: Started sshd@17-172.31.27.52:22-4.175.71.9:40146.service - OpenSSH per-connection server daemon (4.175.71.9:40146).
Apr 13 19:26:08.537970 sshd[6718]: Accepted publickey for core from 4.175.71.9 port 40146 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:08.541423 sshd[6718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:08.549267 systemd-logind[1997]: New session 18 of user core.
Apr 13 19:26:08.567756 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 19:26:09.368820 sshd[6718]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:09.373972 systemd[1]: sshd@17-172.31.27.52:22-4.175.71.9:40146.service: Deactivated successfully.
Apr 13 19:26:09.379142 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 19:26:09.383414 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit.
Apr 13 19:26:09.386151 systemd-logind[1997]: Removed session 18.
Apr 13 19:26:14.552977 systemd[1]: Started sshd@18-172.31.27.52:22-4.175.71.9:40154.service - OpenSSH per-connection server daemon (4.175.71.9:40154).
Apr 13 19:26:15.590508 sshd[6755]: Accepted publickey for core from 4.175.71.9 port 40154 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:15.593147 sshd[6755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:15.601645 systemd-logind[1997]: New session 19 of user core.
Apr 13 19:26:15.606734 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 19:26:16.456722 sshd[6755]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:16.463243 systemd-logind[1997]: Session 19 logged out. Waiting for processes to exit.
Apr 13 19:26:16.465938 systemd[1]: sshd@18-172.31.27.52:22-4.175.71.9:40154.service: Deactivated successfully.
Apr 13 19:26:16.470378 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 19:26:16.473060 systemd-logind[1997]: Removed session 19.
Apr 13 19:26:21.647988 systemd[1]: Started sshd@19-172.31.27.52:22-4.175.71.9:60620.service - OpenSSH per-connection server daemon (4.175.71.9:60620).
Apr 13 19:26:22.694136 sshd[6791]: Accepted publickey for core from 4.175.71.9 port 60620 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:22.697654 sshd[6791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:22.705367 systemd-logind[1997]: New session 20 of user core.
Apr 13 19:26:22.712745 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 19:26:23.530848 sshd[6791]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:23.538423 systemd[1]: sshd@19-172.31.27.52:22-4.175.71.9:60620.service: Deactivated successfully.
Apr 13 19:26:23.544381 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 19:26:23.546347 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit.
Apr 13 19:26:23.548153 systemd-logind[1997]: Removed session 20.
Apr 13 19:26:28.702975 systemd[1]: Started sshd@20-172.31.27.52:22-4.175.71.9:53436.service - OpenSSH per-connection server daemon (4.175.71.9:53436).
Apr 13 19:26:29.735608 sshd[6825]: Accepted publickey for core from 4.175.71.9 port 53436 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:29.737284 sshd[6825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:29.751777 systemd-logind[1997]: New session 21 of user core.
Apr 13 19:26:29.756790 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 19:26:30.580058 sshd[6825]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:30.588858 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit.
Apr 13 19:26:30.589677 systemd[1]: sshd@20-172.31.27.52:22-4.175.71.9:53436.service: Deactivated successfully.
Apr 13 19:26:30.595500 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 19:26:30.597966 systemd-logind[1997]: Removed session 21.
Apr 13 19:26:35.777999 systemd[1]: Started sshd@21-172.31.27.52:22-4.175.71.9:46860.service - OpenSSH per-connection server daemon (4.175.71.9:46860).
Apr 13 19:26:36.818500 sshd[6858]: Accepted publickey for core from 4.175.71.9 port 46860 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:36.821223 sshd[6858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:36.828734 systemd-logind[1997]: New session 22 of user core.
Apr 13 19:26:36.838794 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 19:26:37.647644 sshd[6858]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:37.653656 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit.
Apr 13 19:26:37.654360 systemd[1]: sshd@21-172.31.27.52:22-4.175.71.9:46860.service: Deactivated successfully.
Apr 13 19:26:37.660509 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 19:26:37.665096 systemd-logind[1997]: Removed session 22.
Apr 13 19:26:51.912938 systemd[1]: cri-containerd-4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd.scope: Deactivated successfully.
Apr 13 19:26:51.914731 systemd[1]: cri-containerd-4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd.scope: Consumed 20.780s CPU time.
Apr 13 19:26:51.964488 containerd[2017]: time="2026-04-13T19:26:51.961583296Z" level=info msg="shim disconnected" id=4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd namespace=k8s.io
Apr 13 19:26:51.964488 containerd[2017]: time="2026-04-13T19:26:51.962582160Z" level=warning msg="cleaning up after shim disconnected" id=4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd namespace=k8s.io
Apr 13 19:26:51.964488 containerd[2017]: time="2026-04-13T19:26:51.962619360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:51.970563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd-rootfs.mount: Deactivated successfully.
Apr 13 19:26:52.191401 kubelet[3431]: I0413 19:26:52.191246 3431 scope.go:117] "RemoveContainer" containerID="4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd"
Apr 13 19:26:52.198786 containerd[2017]: time="2026-04-13T19:26:52.198316005Z" level=info msg="CreateContainer within sandbox \"492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 13 19:26:52.233855 containerd[2017]: time="2026-04-13T19:26:52.233793921Z" level=info msg="CreateContainer within sandbox \"492cc0b4dbf7f13661530ac750adf45dcc55c3e27e041cb4d60325b7af7474e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1\""
Apr 13 19:26:52.234998 containerd[2017]: time="2026-04-13T19:26:52.234832893Z" level=info msg="StartContainer for \"d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1\""
Apr 13 19:26:52.236642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932519896.mount: Deactivated successfully.
Apr 13 19:26:52.292009 systemd[1]: Started cri-containerd-d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1.scope - libcontainer container d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1.
Apr 13 19:26:52.341497 containerd[2017]: time="2026-04-13T19:26:52.341405422Z" level=info msg="StartContainer for \"d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1\" returns successfully"
Apr 13 19:26:52.875902 systemd[1]: cri-containerd-a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06.scope: Deactivated successfully.
Apr 13 19:26:52.876376 systemd[1]: cri-containerd-a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06.scope: Consumed 7.017s CPU time, 17.7M memory peak, 0B memory swap peak.
Apr 13 19:26:52.927314 containerd[2017]: time="2026-04-13T19:26:52.927189817Z" level=info msg="shim disconnected" id=a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06 namespace=k8s.io
Apr 13 19:26:52.927314 containerd[2017]: time="2026-04-13T19:26:52.927274909Z" level=warning msg="cleaning up after shim disconnected" id=a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06 namespace=k8s.io
Apr 13 19:26:52.927314 containerd[2017]: time="2026-04-13T19:26:52.927297409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:52.931903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06-rootfs.mount: Deactivated successfully.
Apr 13 19:26:53.205400 kubelet[3431]: I0413 19:26:53.204961 3431 scope.go:117] "RemoveContainer" containerID="a32958b732a3222f9aef994eee7f165cd3f291d5a226d2e43f487e5360187d06"
Apr 13 19:26:53.216472 containerd[2017]: time="2026-04-13T19:26:53.216314206Z" level=info msg="CreateContainer within sandbox \"de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 19:26:53.250192 containerd[2017]: time="2026-04-13T19:26:53.249532690Z" level=info msg="CreateContainer within sandbox \"de81d00518230d9badff26a7b0c313d60aa13a1d47831b6b1d98aa9295f94927\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"308e0b87abc7341f930d54010c99f02ab76c0e961c7524a808979d99e38bc97c\""
Apr 13 19:26:53.251984 containerd[2017]: time="2026-04-13T19:26:53.250711570Z" level=info msg="StartContainer for \"308e0b87abc7341f930d54010c99f02ab76c0e961c7524a808979d99e38bc97c\""
Apr 13 19:26:53.302806 systemd[1]: Started cri-containerd-308e0b87abc7341f930d54010c99f02ab76c0e961c7524a808979d99e38bc97c.scope - libcontainer container 308e0b87abc7341f930d54010c99f02ab76c0e961c7524a808979d99e38bc97c.
Apr 13 19:26:53.380374 containerd[2017]: time="2026-04-13T19:26:53.379307807Z" level=info msg="StartContainer for \"308e0b87abc7341f930d54010c99f02ab76c0e961c7524a808979d99e38bc97c\" returns successfully"
Apr 13 19:26:53.568850 kubelet[3431]: E0413 19:26:53.568325 3431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-52?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 19:26:58.137242 systemd[1]: cri-containerd-592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd.scope: Deactivated successfully.
Apr 13 19:26:58.138436 systemd[1]: cri-containerd-592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd.scope: Consumed 4.580s CPU time, 16.0M memory peak, 0B memory swap peak.
Apr 13 19:26:58.182782 containerd[2017]: time="2026-04-13T19:26:58.182686635Z" level=info msg="shim disconnected" id=592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd namespace=k8s.io
Apr 13 19:26:58.187480 containerd[2017]: time="2026-04-13T19:26:58.184223979Z" level=warning msg="cleaning up after shim disconnected" id=592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd namespace=k8s.io
Apr 13 19:26:58.187480 containerd[2017]: time="2026-04-13T19:26:58.184268775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:26:58.188629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd-rootfs.mount: Deactivated successfully.
Apr 13 19:26:59.230238 kubelet[3431]: I0413 19:26:59.229417 3431 scope.go:117] "RemoveContainer" containerID="592c24bfe701d28553d4d0e9d3ab9bd6ab404a38adca3160a0ea92e9d9207bdd"
Apr 13 19:26:59.233535 containerd[2017]: time="2026-04-13T19:26:59.233299216Z" level=info msg="CreateContainer within sandbox \"2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 19:26:59.264526 containerd[2017]: time="2026-04-13T19:26:59.264014716Z" level=info msg="CreateContainer within sandbox \"2e4590f5b2123645612a8c4bbc4e17640fc25c5fc0d42c60271956f039771e32\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5f333f7a7941912abb11ef88f2e9f8bf711705763cf4608f54a9f3d787d73d47\""
Apr 13 19:26:59.265364 containerd[2017]: time="2026-04-13T19:26:59.264746332Z" level=info msg="StartContainer for \"5f333f7a7941912abb11ef88f2e9f8bf711705763cf4608f54a9f3d787d73d47\""
Apr 13 19:26:59.332805 systemd[1]: Started cri-containerd-5f333f7a7941912abb11ef88f2e9f8bf711705763cf4608f54a9f3d787d73d47.scope - libcontainer container 5f333f7a7941912abb11ef88f2e9f8bf711705763cf4608f54a9f3d787d73d47.
Apr 13 19:26:59.400114 containerd[2017]: time="2026-04-13T19:26:59.399641897Z" level=info msg="StartContainer for \"5f333f7a7941912abb11ef88f2e9f8bf711705763cf4608f54a9f3d787d73d47\" returns successfully"
Apr 13 19:27:03.569694 kubelet[3431]: E0413 19:27:03.569610 3431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-52?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 19:27:03.970603 systemd[1]: cri-containerd-d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1.scope: Deactivated successfully.
Apr 13 19:27:04.011379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1-rootfs.mount: Deactivated successfully.
Apr 13 19:27:04.024353 containerd[2017]: time="2026-04-13T19:27:04.024029048Z" level=info msg="shim disconnected" id=d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1 namespace=k8s.io
Apr 13 19:27:04.024353 containerd[2017]: time="2026-04-13T19:27:04.024104456Z" level=warning msg="cleaning up after shim disconnected" id=d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1 namespace=k8s.io
Apr 13 19:27:04.024353 containerd[2017]: time="2026-04-13T19:27:04.024126620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:27:04.253508 kubelet[3431]: I0413 19:27:04.253040 3431 scope.go:117] "RemoveContainer" containerID="4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd"
Apr 13 19:27:04.255247 kubelet[3431]: I0413 19:27:04.254519 3431 scope.go:117] "RemoveContainer" containerID="d7447824ab1d2eb9309a4cda15dbeee5d0224b23cb6106646764c85db62a8bb1"
Apr 13 19:27:04.255247 kubelet[3431]: E0413 19:27:04.254774 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-lh6z7_tigera-operator(b308fb24-da5f-448a-b4a8-f1cbfc170e84)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-lh6z7" podUID="b308fb24-da5f-448a-b4a8-f1cbfc170e84"
Apr 13 19:27:04.256910 containerd[2017]: time="2026-04-13T19:27:04.256859145Z" level=info msg="RemoveContainer for \"4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd\""
Apr 13 19:27:04.264144 containerd[2017]: time="2026-04-13T19:27:04.264064185Z" level=info msg="RemoveContainer for \"4e5f59ba623c415ee0ecaef6497717f61cdf87e8a897c2096c830dfc551dbffd\" returns successfully"